Sara Beery (sbeery@caltech.edu)
2019-08-09 13:16:51

@Sara Beery has joined the channel

Sunandan (sunchak@iu.edu)
2019-08-09 13:17:24

@Sunandan has joined the channel

Sara Beery (sbeery@caltech.edu)
2019-08-09 13:21:28

Hello! I hope this can be a good way for those interested in AI for conservation to stay connected and keep each other informed 🙂

👍 Siyu Yang, Elizabeth Bondi, Hartwig Adam, Sreejith Menon
💯 Jon Van Oast
Tanya Birch (tanyak@google.com)
2019-08-09 13:37:20

@Tanya Birch has joined the channel

Sara Beery (sbeery@caltech.edu)
2019-08-09 13:40:44

I've set up channels for #newpapers #upcomingevents and #news

Sara Beery (sbeery@caltech.edu)
2019-08-09 13:42:29

We can also set up channels for different topics within the space. For instance, #camera_traps 🐅

🦏 Stefan Schneider
gvanhorn (grv22@cornell.edu)
2019-08-09 13:42:43

@gvanhorn has joined the channel

gvanhorn (grv22@cornell.edu)
2019-08-09 13:43:27

Thanks @Sara Beery for starting this!

👍 Oisin Mac Aodha
Jennifer Marsman (jennmar@microsoft.com)
2019-08-09 13:50:23

@Jennifer Marsman has joined the channel

Siyu Yang (yasiyu@microsoft.com)
2019-08-09 13:53:53

@Siyu Yang has joined the channel

Jason Parham (bluemellophone@gmail.com)
2019-08-09 14:08:46

@Jason Parham has joined the channel

Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2019-08-09 14:26:58

@Jason Holmberg (Wild Me) has joined the channel

Jon Van Oast (jon@wildme.org)
2019-08-09 16:32:12

@Jon Van Oast has joined the channel

Stefan Schneider (sschne01@uoguelph.ca)
2019-08-09 19:52:36

@Stefan Schneider has joined the channel

Jonathan (jonathanhuang@google.com)
2019-08-09 20:19:50

@Jonathan has joined the channel

Elijah Cole (Deactivated) (ecole@caltech.edu)
2019-08-09 20:36:58

@Elijah Cole (Deactivated) has joined the channel

Saul Greenberg (saul@ucalgary.ca)
2019-08-09 21:53:44

@Saul Greenberg has joined the channel

Anh Nguyen (anhnguyen@auburn.edu)
2019-08-09 22:38:55

@Anh Nguyen has joined the channel

Elizabeth Bondi (ebondi@g.harvard.edu)
2019-08-10 05:11:20

@Elizabeth Bondi has joined the channel

Manish Rai (rai00007@umn.edu)
2019-08-10 07:04:10

@Manish Rai has joined the channel

Lily Xu (lily_xu@g.harvard.edu)
2019-08-10 09:57:19

@Lily Xu has joined the channel

Oisin Mac Aodha (macaodha@caltech.edu)
2019-08-10 17:50:28

@Oisin Mac Aodha has joined the channel

Christine Kaeser-Chen (christinech@google.com)
2019-08-11 14:40:42

@Christine Kaeser-Chen has joined the channel

Christine Kaeser-Chen (christinech@google.com)
2019-08-11 14:41:34

thanks @Sara Beery for setting this up! 😄

Tanya Berger-Wolf (tanya@wildme.org)
2019-08-11 14:52:16

@Tanya Berger-Wolf has joined the channel

Sara Beery (sbeery@caltech.edu)
2019-08-12 10:23:45

Cool new planet money episode! https://www.npr.org/2019/08/09/749938354/episode-932-deep-learning-with-the-elephants

NPR.org
🐘 Lily Xu, Hartwig Adam
👍 Thomas Starnes
Sara Beery (sbeery@caltech.edu)
2019-08-12 10:32:06

*Thread Reply:* Here's AI for Earth's article about the project: https://news.microsoft.com/on-the-issues/2018/08/09/can-sound-help-save-a-dwindling-elephant-population-scientists-using-ai-think-so/

On the Issues
👍 Manish Rai
Jon Van Oast (jon@wildme.org)
2019-08-12 13:48:01

*Thread Reply:* great links. listening to the episode now. on the topic of elephants, here is an article about a project we worked on with vulcan, which uses cv to count elephants, if anyone is interested.

https://engineering.vulcan.com/2018_1112_How-many-elephants-are-there.aspx

👍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2019-08-15 12:06:59

Thanks @Sara Beery for getting this going.

Ben Weinstein (benweinstein2010@gmail.com)
2019-08-15 12:07:51

Hi all, I'm Ben. Some of you know me; for everyone else, I work on deep learning for airborne tree detection (demo: http://treedetection.westus.cloudapp.azure.com/shiny/apps/TreeDemo/)

🌲 Sara Beery
👍 Siyu Yang, Jon Van Oast, Lily Xu
Ben Weinstein (benweinstein2010@gmail.com)
2019-08-21 19:41:49

*Thread Reply:* moved to here: http://tree.westus.cloudapp.azure.com/shiny/

Sara Beery (sbeery@caltech.edu)
2019-08-15 12:16:05

Intros are a great idea Ben! I'm Sara, I work on species recognition in camera traps, trying to figure out how to train models that generalize well and can handle rare or previously unseen species. I published the Caltech Camera Traps dataset (https://beerys.github.io/CaltechCameraTraps/), run the iWildCam competition for FGVC at CVPR (https://www.kaggle.com/c/iwildcam-2019-fgvc6), trained the Megadetector for Microsoft AI for Earth (https://github.com/microsoft/CameraTraps), and am currently interning at Google working with Wildlife Insights (https://wildlifeinsights.org/home-0). Really happy to see so many interested people on this slack!

beerys.github.io
kaggle.com
GitHub
👍 Jon Van Oast, Lily Xu
Siyu Yang (yasiyu@microsoft.com)
2019-08-15 13:26:22

Hi guys! I’m Siyu and I work at the AI for Earth program at Microsoft. I’ve been mostly working on camera traps too: curating datasets from partners (the public ones are on http://lila.science), adding training data to and operationalizing the Megadetector that Sara trained, and making batch inference using the Megadetector available to organizations through an API (https://github.com/microsoft/CameraTraps/tree/master/api/batch_processing). I’ve also done some projects using satellite images, and am learning to use Pangeo. I’m a part of a Sustainability Garage at Microsoft and we have hackathon projects with NGOs such as The Ocean Cleanup (detecting plastic debris in rivers).

LILA BC
🐅 Sara Beery, Lily Xu
👍 Jon Van Oast, Bourhan
Bourhan (bourhan@rfcx.org)
2019-08-15 14:00:57

Hi Everyone. My name is Bourhan and I am part of a non-profit organization called Rainforest Connection. We work on using acoustic data streamed from rainforests around the world to detect sounds of chainsaws, vehicles, gunshots, etc. to fight against illegal logging and poaching. We have also started working on a bio-acoustic platform to aid in the detection of a variety of animal species using AI. Looking forward to chatting with everyone.

Here's a video Google did about our work in Brazil: https://www.youtube.com/watch?v=Lbn6kVlFaSQ

🌳 Sara Beery, Lily Xu
❤️ Siyu Yang
Lily Xu (lily_xu@g.harvard.edu)
2019-08-15 14:01:09

Hello! My name is Lily and I’m a PhD student at Harvard working on applications of AI and ML to wildlife conservation, specifically combating illegal wildlife poaching through more intelligent ranger patrols. Here’s a recent video highlighting our project: https://www.youtube.com/watch?v=85bRbCcwiNg

YouTube
USCViterbi (https://www.youtube.com/user/USCViterbi)
🐘 Sara Beery, gvanhorn, Sreejith Menon
👍 Bourhan, Siyu Yang, gvanhorn
Jon Van Oast (jon@wildme.org)
2019-08-15 14:35:28

hi everyone -- great intros; such an interesting group! i am jon van oast and i am a developer at wild me, a 501(c)3 non-profit in the u.s. which maintains open source software for wildlife conservation. our work centers around wildbook, which is a platform for collecting and analyzing wildlife data, based mostly on photo ID of individual animals. we use ai/ml in terms of both computer vision (detection, identification, etc), as well as some other uses, like machine translation and analysis of social media (to look for wildlife photos). https://wildme.org | https://wildbook.org | https://github.com/WildbookOrg/Wildbook

GitHub
🦓 Sara Beery, Siyu Yang, Lily Xu, Sreejith Menon
Amy Panikowski (aepanikowski@gmail.com)
2019-08-16 05:38:02

Hi Everyone! I see that I'm in a group with a bunch of amazing people with awesome skills! I'm Amy and I'm a freelance scientist and independent consultant for international development work. I'm a biologist and geographer living in South Africa. I'm freelancing at the moment as obtaining work in this country is extremely difficult until I can obtain permanent residence status. I'm a part of the Mountain Goat Molt Project and we have an AI grant for it. I've done all the hand-processing work while my colleagues (like Sara!) are focusing on the ML. I've done camera trapping in the past on a community-owned game reserve in Zululand. When I'm not freelancing, I'm focusing on education and short-distance relocation of snakes in our rural area. I'm a happy generalist who enjoys learning new skills as opportunities arise. I'm really interested in AI for conservation and am learning all that I can. Great to be here with you all!

🐐 Sara Beery
💚 Katarzyna Nowak
Ștefan Istrate (stefan.istrate@gmail.com)
2019-08-16 08:07:17

Great to hear about so many interesting people and projects! I'm Ștefan and I work as a software engineer at Google in London, recently joining the team working with Wildlife Insights (https://wildlifeinsights.org). I have a great interest in using technology for wildlife conservation (and any other project dealing with environmental issues, really), and I have almost 6 years of experience in the machine learning space. I am passionate about the outdoors and when I'm not in front of the computer I am a nature photographer (https://www.stefanistrate.com/).

stefanistrate.com
📷 Sara Beery
gvanhorn (grv22@cornell.edu)
2019-08-16 08:39:08

*Thread Reply:* Awesome photos!

Ștefan Istrate (stefan.istrate@gmail.com)
2019-08-16 08:41:22

*Thread Reply:* Thanks 🙂

Sara Beery (sbeery@caltech.edu)
2019-08-16 10:31:34

*Thread Reply:* These are gorgeous!

Ștefan Istrate (stefan.istrate@gmail.com)
2019-08-16 11:43:57

*Thread Reply:* Thank you, @Sara Beery!

gvanhorn (grv22@cornell.edu)
2019-08-16 08:25:23

Hello folks! It’s great to be in such good company! I’m Grant and I recently finished my phd at Caltech (same lab as Sara) during which I built the computer vision components for Merlin Bird ID, iNaturalist, and Seek by iNaturalist. I’m now a researcher at the Cornell Lab of Ornithology where I’ll be trying to make Merlin smarter, crunching through eBird data, and building tools to analyze the images, videos and audio recordings in the Macaulay Library. I’m passionate about building applications that enable people to tap into expert knowledge while exploring the wildlife around them. Cheers!

🌿 Lily Xu, Sara Beery, Oisin Mac Aodha
👍 Bourhan, Amy Panikowski
Lily Xu (lily_xu@g.harvard.edu)
2019-08-16 08:33:03

*Thread Reply:* Neat!! My friend just showed me the app Seek this weekend — very impressive vision work and so fun to use!

gvanhorn (grv22@cornell.edu)
2019-08-16 08:38:12

*Thread Reply:* Awesome! Thank you! Lots of room for making it better, but off to a reasonable start 😃

Dave Thau (thau@wwf.org)
2019-08-16 16:47:46

Greetings! I'm Dave Thau, currently working at WWF on their global science team. I've been doing biodiversity-related computer stuff for about 20 years now, with a focus on data management, remote sensing, and yes, AI. Nice to be here!

🤩 Jon Van Oast, Sara Beery
👍 Siyu Yang, Bourhan, Lily Xu
Ethan White (ethan.white@weecology.org)
2019-08-19 08:30:48

👋 I'm Ethan, I work at the University of Florida and am working with @Ben Weinstein on detecting and classifying trees over large scales using remote sensing. http://treedetection.westus.cloudapp.azure.com/shiny/apps/TreeDemo/ Very excited to be part of these conversations!

🌲 Sara Beery, Siyu Yang, Lily Xu, Amy Panikowski
Ben Weinstein (benweinstein2010@gmail.com)
2019-08-21 19:41:29

*Thread Reply:* just updating, due to a small bit of housecleaning, the demo now lives here: http://tree.westus.cloudapp.azure.com/shiny/

Amy Panikowski (aepanikowski@gmail.com)
2019-09-06 04:31:34

*Thread Reply:* Go Gators!🐊

Elizabeth Bondi (ebondi@g.harvard.edu)
2019-08-19 13:32:43

Hi everyone! Great to meet you all and be involved - thank you! My name is Liz Bondi and I’m a PhD student at Harvard advised by Prof. Milind Tambe. I’ve been working on using drones to protect wildlife from poaching, including by detecting poachers and animals automatically from thermal infrared drone video, and planning paths for the drones. I can’t wait to talk with everyone!

🛩️ Sara Beery, Lily Xu
Ben Weinstein (benweinstein2010@gmail.com)
2019-08-20 12:37:30

*Thread Reply:* Hey Liz, that was a nice talk I saw at KDD; sorry you weren't there.

Elizabeth Bondi (ebondi@g.harvard.edu)
2019-08-20 19:14:06

*Thread Reply:* Thank you, Ben! I’m also sorry I wasn’t there. I would have really liked to hear more about your work, but I hope to catch you at the next gathering!

Sara Beery (sbeery@caltech.edu)
2019-08-19 13:38:21

Join the #upcoming_events channel to hear about upcoming opportunities! @Dave Thau just posted a cool opportunity for a "mini-conference around the topic of counting plants, animals and other objects on aerial, satellite and other high resolution imagery"

Dave Thau (thau@wwf.org)
2019-08-19 13:53:41

Not sure what channel to post this on, so here it is! WWF Netherlands has put out a request for proposals relating to predicting tree cover loss. The link at Wildlabs.net gives a good overview. Please forward around or suggest to me places where I can post this.

https://www.wildlabs.net/resources/careers/request-proposals-deforestation-early-warning-system-wwf

👍 Jon Van Oast, Sara Beery, Sreejith Menon
Jon Van Oast (jon@wildme.org)
2019-08-19 13:55:27

*Thread Reply:* maybe we need an RFP channel ?

Sara Beery (sbeery@caltech.edu)
2019-08-19 13:56:36

*Thread Reply:* Jon, sounds great to me! Feel free to start one and post about it on the general channel so people can join 🙂

💯 Jon Van Oast
Saket Anand (anands@iiitd.ac.in)
2019-08-20 02:15:49

Hello everyone! Glad to be part of this exciting group! I am Saket Anand, a faculty member at IIIT-Delhi, India. I am a Computer Vision person interested in applications to conservation. I have been working on species detection and re-identification of tigers in India. Some of you may already know that India does a tiger census survey every four years. The latest one from 2018 involved 15000 camera traps deployed over 26000 different locations. Looking forward to more interactions!

👍 Siyu Yang, Ethan White, Elizabeth Bondi, Nicole Egna, Sreejith Menon
🐅 Sara Beery, Manish Rai, Lily Xu
Stefan Schneider (sschne01@uoguelph.ca)
2019-08-26 21:52:10

Hello everyone! What an exciting group to be a part of! My name is Stefan Schneider and I'm a PhD student at the University of Guelph, Canada. My research focuses primarily on Similarity Comparison Networks and one-shot learning for animal individual re-identification from camera traps. I've had successful results for Chimpanzees, Humpback Whales, Fruit Flies and Octopus thus far. With further research, I think it can be a methodology that can provide ecological metrics, such as population density estimates, in realtime.

🐙 Sara Beery
🐳 Jon Van Oast, Jason Holmberg (Wild Me), Lily Xu
🙊 Siyu Yang
👍 Manish Rai, Bourhan
Bourhan (bourhan@rfcx.org)
2019-08-27 10:30:09

Hi Everyone... Does anyone here work primarily (or as part of their work) on bioacoustics? I'd love to hear more about your work...

Oisin Mac Aodha (macaodha@caltech.edu)
2019-08-30 13:16:41

*Thread Reply:* Hey @Bourhan, Dan Stowell is a great contact in this space. I've worked on bat species detection and classification in audio in the past and would be happy to chat more. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005995

Ben Weinstein (benweinstein2010@gmail.com)
2019-08-30 10:16:05

no, but I had a meeting with Dan Stowell (http://www.mcld.co.uk/research/) who was doing some great work. worth getting in touch with.

mcld.co.uk
👍 Sara Beery, Bourhan, Oisin Mac Aodha
MarconiS (sergio.marconi@weecology.org)
2019-09-05 17:25:47

Hi! I am Sergio, PhD student at University of Florida, learning from @Ethan White and @Ben Weinstein in some of their projects! My research focuses primarily on predicting traits for individual tree objects from a combination of "little high quality - big lower quality" field data and remote sensing. Super excited to be part of these conversations!

🌲 Lily Xu, Ethan White, Sara Beery
Amy Panikowski (aepanikowski@gmail.com)
2019-09-06 04:32:35

*Thread Reply:* Go Gators!🐊

🐊 Ethan White, MarconiS, Sara Beery
Sara Beery (sbeery@caltech.edu)
2019-09-09 17:04:24

Hello everyone!

We (@Stefan Schneider & @Sara Beery) are considering putting together a WACV workshop proposal focusing on Individual Re-ID in images, video, and acoustics. Individual Re-ID uses a range of methods, from simple statistics to advanced machine learning, to re-identify an animal individual previously recorded in a database. Here are three fantastic examples, covering the three data sources: Image: https://www.researchgate.net/publication/320609526_Wildbook_Crowdsourcing_computer_vision_and_data_science_for_conservation, Video: https://advances.sciencemag.org/content/5/9/eaaw0736, and Audio: https://royalsocietypublishing.org/doi/full/10.1098/rsif.2018.0940.

We want to get an idea of the community's interest and get feedback on what would make this a valuable, worthwhile workshop. Please let us know if you have suggestions for speakers, topics, or good work in the field that we should try to include!

Science Advances
🐵 Stefan Schneider, Oisin Mac Aodha
🐦 Stefan Schneider
👍 Jon Van Oast, Lily Xu
Jon Van Oast (jon@wildme.org)
2019-09-09 17:43:37

*Thread Reply:* thanks for posting! (and thanks for the link to our paper!) ... i will see if anyone from the wildbook team is going to be attending.

Stefan Schneider (sschne01@uoguelph.ca)
2019-09-09 17:07:30

We'd be super excited with anyone coming on board!

Sara Beery (sbeery@caltech.edu)
2019-09-09 19:11:59

The previous invite link for this slack expired after a month, here's a new one set to never expire: https://join.slack.com/t/aiforconservation/shared_invite/enQtNzM1ODMwMDY2MDY3LTEzM2QxNDJiOWQ2MzYyZmQ0YjgxZGY4N2MzMmNiOTY3MTIxNjliYTA5ZjM4NzZhYWY1YjVkNTQ0MjNkZTUxOTU

Please feel free to invite anyone, I want this to be open to the community 🙂

🙏 Jon Van Oast
👍 Siyu Yang
Sreejith Menon (smenon59@bloomberg.net)
2019-09-11 09:06:18

Hi, I am Sreejith. I was Prof. Berger-Wolf's student and my thesis was about the use of social media images to predict populations of endangered species.

Currently, I am a software engineer at Bloomberg working on semantic matching of unstructured documents. I also work very closely with Bloomberg Data for Good Exchange and I would be more than happy to talk to any of you for panel/workshop/paper proposals at D4GX 2020. More details about this year's D4GX here: www.d4gx.com

🦒 Sara Beery, Lily Xu, Sayali kulkarni
👍 Siyu Yang
👋 Jon Van Oast
Sreejith Menon (smenon59@bloomberg.net)
2019-09-11 09:07:34

This is a great and diverse group. Looking forward to connecting and talking to all of you! 🦓

👍 Oisin Mac Aodha, Sara Beery
Sayali kulkarni (sayali.kulkarni@gmail.com)
2019-09-15 11:04:17

Great channel! Thanks for starting this @Sara Beery. Hello everyone, this is Sayali. I have been working on the species identification models for www.wildlifeinsights.org as my side project at Google research where my main area of work is language understanding for conversation based recommendations.

🐅 Sara Beery, Siyu Yang
Riccardo Pressiani (riccardo.pressiani@mail.polimi.it)
2019-09-16 18:29:57

Hi everyone! I’m Riccardo. Just like @Sreejith Menon, I was Prof. Berger-Wolf’s student. My research work was about developing a sensor system to track wild baboons' behavior. We built both the tracking collars and bracelets, and the machine learning framework to analyze the data and extract information about the behaviors. The research project was a collaboration among the University of Illinois at Chicago, UC Davis and Politecnico di Milano (Italy).

I recently co-founded Wepo Inc. (https://wepo.io), a software consulting company with a focus on AI and Machine Learning. We’re looking forward to continuing our projects and effort in the field of environment and conservation.

Really happy to join you and to be in this community!

wepo.io
🐒 Sara Beery, Siyu Yang, Lily Xu
👍 Jon Van Oast
Alexis Joly (alexis.joly@inria.fr)
2019-09-23 03:33:24

Hi everyone, I'm Alexis. I'm the scientific coordinator of the Pl@ntNet project (https://plantnet.org/en/) and of the LifeCLEF challenges (http://www.lifeclef.org) since 2011. I'm very glad to join this exciting channel.

Pl@ntNet
👍 Oisin Mac Aodha, Sara Beery, Laurel Hopkins, Siyu Yang
👋 Jon Van Oast
Ben Weinstein (benweinstein2010@gmail.com)
2019-09-23 12:50:12

*Thread Reply:* Welcome @Alexis Joly, i’m a big fan of the lifeCLEF work!

Sara Beery (sbeery@caltech.edu)
2019-09-23 16:10:58

*Thread Reply:* Awesome to have you here!

Sara Beery (sbeery@caltech.edu)
2019-10-03 11:48:13

Hey everyone! If you haven't already, join the #upcoming_events channel for info on AI for Conservation workshops, symposiums, and more 🙂

👍 Lily Xu
Sara Beery (sbeery@caltech.edu)
2019-10-25 02:54:01

I spoke at the BiodiversityNext Conference this week (https://biodiversitynext.org/), along with @Dave Thau, Serge Belongie, and many others. There were 4 different Machine Learning sessions, all of which were so full they were standing room only, and I was impressed by the number of awesome ML applications and methods being considered. There were lots of image ID, audio ID, and NLP projects, lots of need for ML expertise, and there wasn't a lot of existing crossover with this community. I'm hoping to connect these communities moving forward!

Here are a few cool demos that were released for the "Advancing biodiversity research through artificial intelligence" session:

You can try out the Microsoft AI for Earth MegaDetector on TFHub here: https://overlay.sandbox.google.com/embed?overlay_name=megadetector_v3

You can try the first published AI model from the partnership between GBIF and Visipedia here: https://overlay.corp.google.com/?overlay_name=mushroom_recognizer_v2. This model was built from the Svampeatlas (http://www.svampeatlas.dk/) Fungi dataset from last year's FGVCx Fungi challenge.

Wildlife Insights now has a public "Try AI" page where you can test out their model: https://www.wildlifeinsights.org/try-ai

biodiversity_next
Oisin Mac Aodha (macaodha@caltech.edu)
2019-10-26 00:27:44

*Thread Reply:* Very cool. Any pointers to the NLP projects?

Sara Beery (sbeery@caltech.edu)
2019-10-26 02:23:21

*Thread Reply:* Most of the people I talked to were working on OCR/NLP for old museum specimens and documents. I'll see if I can find a contact.

Sara Beery (sbeery@caltech.edu)
2019-10-27 20:57:24

*Thread Reply:* @Laurens Hogeweg or @Mike Trizna can you point to an OCR/NLP project for museum specimens? I know a few were mentioned.

Laurens Hogeweg (laurens.hogeweg@naturalis.nl)
2019-10-28 06:12:38

*Thread Reply:* I think OCR for printed text is more or less solved. You could use object detection to find the labels in a specimen and then perform OCR.
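That two-stage pipeline (detect label regions, then OCR each crop) can be sketched roughly as below. This is a minimal illustration, not code from the thread: `detect_labels` and `ocr` are hypothetical stand-ins for whatever real detector and OCR engine (e.g. Tesseract) you plug in.

```python
def crop(image, box):
    """Crop a region from an image, given a normalized (x0, y0, x1, y1) box.

    `image` is a list of pixel rows (any nested list-of-lists works here).
    """
    h, w = len(image), len(image[0])
    x0, y0, x1, y1 = box
    r0, r1 = int(y0 * h), int(y1 * h)
    c0, c1 = int(x0 * w), int(x1 * w)
    return [row[c0:c1] for row in image[r0:r1]]


def read_specimen_labels(image, detect_labels, ocr, min_conf=0.5):
    """Run the detector, keep confident label boxes, and OCR each crop.

    `detect_labels(image)` yields (box, confidence) pairs;
    `ocr(region)` returns the text read from a cropped region.
    Both are supplied by the caller.
    """
    texts = []
    for box, conf in detect_labels(image):
        if conf >= min_conf:
            texts.append(ocr(crop(image, box)))
    return texts
```

The confidence threshold matters in practice: a low-confidence box is usually a smudge or a partial label, and OCR on it produces noise rather than usable text.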

Laurens Hogeweg (laurens.hogeweg@naturalis.nl)
2019-10-28 06:12:45

*Thread Reply:* For handwriting I know only this one: https://www.universiteitleiden.nl/onderzoek/onderzoeksprojecten/wiskunde-en-natuurwetenschappen/making-sense-of-illustrated-handwritten-archives

Universiteit Leiden
👍 Oisin Mac Aodha
Sara Beery (sbeery@caltech.edu)
2019-10-28 08:58:07

*Thread Reply:* That's what I was thinking of! Thanks 🙂

Laurens Hogeweg (laurens.hogeweg@naturalis.nl)
2019-10-25 09:07:14

Hi, all, was just invited to this Slack channel by @Sara Beery and was one of the speakers in the AI sessions at BiodiversityNext

Laurens Hogeweg (laurens.hogeweg@naturalis.nl)
2019-10-25 09:07:48

I work on building large-scale biodiversity identification models using deep learning at Naturalis Biodiversity Center

🌿 Sara Beery, Lily Xu
👍 Siyu Yang
Laurens Hogeweg (laurens.hogeweg@naturalis.nl)
2019-10-25 09:08:49

And applying them in several practical applications related to citizen science and monitoring

wp (wp@xeno-canto.org)
2019-10-25 11:12:55

hello group, thanks for the invite! As an introduction: Together with Bob Planque I started www.xeno-canto.org ("XC") back in 2005 and ever since I have been looking after the site & the sound recordings it holds. Together with a dynamic community. It's been very rewarding. Perhaps you've heard of us. XC have been a keen supporter of development in sound recognition over the last couple of years, mainly by supplying the bulk of the recordings for BirdClef. We are looking forward to one day implement some clever uses for sound recognition tools in the site.

🐥 Sara Beery, Oisin Mac Aodha, Siyu Yang
🎶 Sara Beery
Ben Koger (benkoger@gmail.com)
2019-10-28 02:26:51

Hi everyone, I just joined the channel. I'm a PhD student studying biology at the Max Planck Institute of Animal Behavior in Konstanz, Germany. My main project involves designing drone and computer vision based systems to study herds of ungulates in Kenya (project website: herdhover.com). I'm also about to begin a mini project doing an automatic camera-based bat census at Kasanka National Park in Zambia. I'm excited to join this channel!

👍 Oisin Mac Aodha, Sara Beery
🦌 Sara Beery, Lily Xu
🦇 Siyu Yang
Wilfried Wöber (wilfried.woeber@technikum-wien.at)
2019-10-28 02:44:53

Hi! I presented at biodiversity_next (that annoying non-CNN presentation). Currently, I am working at BOKU Wien using unsupervised learning for novelty detection, and at UAS Technikum Wien in the "Digital Factory" Robotics Lab. I am really looking forward to new applications!

🙂 Sara Beery, Laurens Hogeweg
👍 Jon Van Oast, Siyu Yang
Sara Beery (sbeery@caltech.edu)
2019-10-28 23:02:06

For everyone new who is joining: there are channels for #upcomingevents (this includes workshops, seminars, etc.) #newpapers (for people to post their published work) #news (where you can post media, articles, blogs,...)

If anyone wants to start a channel for a specific topic, feel free! For example, if you work with camera traps you can join our #camera_traps channel 🙂

🎉 Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2019-10-29 02:05:19

*Thread Reply:* And please introduce yourselves!

Frederic (frederic@apic.ai)
2019-10-30 06:45:13

Hi everyone, I am the co-founder of www.apic.ai. We help preserve biodiversity by using honey bees as biosensors: we use computer vision techniques to analyze the health of the bees, which serve as a proxy for the health of all pollinators in the environment.

We are based in Karlsruhe, Germany, so if anyone is in the area feel free to drop by for a coffee. Furthermore, if any one of you is currently at ICCV too, I would love to grab some lunch or coffee with you. 🙂

More info about us: https://www.apic.ai , https://about.google/stories/save-the-bees/

I am excited to join this slack 🙂

apic.ai
😎 Lloyd Hughes
🐝 Sara Beery, gvanhorn, Siyu Yang, Sam Kelly
😃 Henrik Cox (Sentinel)
🎉 Jon Van Oast
Wilfried Wöber (wilfried.woeber@technikum-wien.at)
2019-10-31 03:24:55

@Frederic looks cool - are there any publications? Looks like YOLOv3 🙂

Frederic (frederic@apic.ai)
2019-10-31 05:14:27

*Thread Reply:* Hi Wilfried, here is our cs related publication: http://openaccess.thecvf.com/content_ICCVW_2019/papers/CVWC/Marstaller_DeepBees_-_Building_and_Scaling_Convolutional_Neuronal_Nets_For_Fast_ICCVW_2019_paper.pdf

We are using SSD for the pollen detection, but planning to switch to tinyYOLO in the future, when DeepBees goes on-device.

For bee detection in general, we use something similar to what K. Bozek, L. Hebert, A. S. Mikheyev, and G. J. Stephens used in "Towards dense object tracking in a 2d honeybee hive." But everything should be in the paper 😉

Is there a particular reason why you are interested?

Wilfried Wöber (wilfried.woeber@technikum-wien.at)
2019-10-31 07:29:20

*Thread Reply:* Hi Frederic, cool - thank you! I used to be a data scientist and worked as a PhD student for 1.5 years with Harald Meimberg (University of Natural Resources and Life Sciences) in Vienna. We had similar ideas. I am really interested in the models behind solutions, but I work with completely different models (GPLVMs) for image analysis.

Frederic (frederic@apic.ai)
2019-11-01 04:20:27

*Thread Reply:* That sounds great. If you think he is still interested to collaborate on this topic, it would be awesome if you can connect me with Harald Meimberg.

What kind of ideas did you have?

Wilfried Wöber (wilfried.woeber@technikum-wien.at)
2019-11-08 01:37:46

*Thread Reply:* Sorry for my delay (I was at a conference). I am working on novel models for object classification using nearly zero parameters (e.g., layers or numbers of neurons), which extract separated features first and do the classification afterwards to decrease the number of training examples needed.

I will chat with Harald Meimberg about this topic.

Sara Beery (sbeery@caltech.edu)
2019-11-07 18:37:03

@Dave Thau put together a trip report from the BiodiversityNext conference, with input from Anouk van Stokkom and me: https://docs.google.com/document/d/11_eOrpVaE7xEDFzwRL2tz_QvcPZm-52c8ezrjDA4Qok/edit?usp=sharing

🎉 Jon Van Oast, gvanhorn, Frederic, Ben Koger
🦉 Subhransu Maji
Silvia Zuffi (silvia.zuffi@tue.mpg.de)
2019-11-11 13:14:29

Hello everybody, my name is Silvia Zuffi, my topic is 3D animal (quadruped) modeling, but I have a question about a new thing that I want to explore, not related to quadrupeds: I want to record and analyze/detect the vocalizations of fishes. We have done some initial recordings, and well, it is hard to say what we got. I was wondering if among the people here there is somebody who has experience in capturing underwater sound. Thank you!

🐟 Sara Beery, Stefan Schneider, Frederic
🎤 Stefan Schneider, Sara Beery
Lily Xu (lily_xu@g.harvard.edu)
2019-11-11 13:15:54

*Thread Reply:* I don't know much about it, but there's been some work done on passive acoustic monitoring of whales that deals specifically with underwater sound! Here's an example of a recent paper: https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.13244

Sara Beery (sbeery@caltech.edu)
2019-11-11 13:16:39

*Thread Reply:* @Tanya Birch could you connect Silvia to the people in Geo working on whale audio? It seems like a good place to start 🙂

👍 Jon Van Oast
Siyu Yang (yasiyu@microsoft.com)
2019-11-11 19:51:45

*Thread Reply:* Someone on my team worked with NOAA Fisheries to make classifiers to help process their underwater recordings of beluga whale calls. I imagine these are quite different from fish sounds, but if you’d like, I can introduce you to the NOAA people who ran the project? Feel free to email me (yasiyu@microsoft.com) with a brief description of your project goals.

❤️ Sara Beery
Silvia Zuffi (silvia.zuffi@tue.mpg.de)
2019-11-12 00:04:42

*Thread Reply:* Thank you very much!

Hemal Naik (hnaik@ab.mpg.de)
2019-11-12 03:25:51

*Thread Reply:* Hi Silvia, We (Max Planck Institute of Animal Behavior) have some people in our lab with good idea of recording underwater data, we also have some data on fishes that communicate underwater (e.g. Danionella). Please email me (hnaik@ab.mpg.de) and I can put you in touch with some biologists in our lab.

Jon Van Oast (jon@wildme.org)
2019-11-12 17:48:14

*Thread Reply:* @Silvia Zuffi there is a group on this topic over on wildlabs. might be worth checking out? (and there are probably other threads on there too)

https://www.wildlabs.net/community/group/acoustic-monitoring

WILDLABS.NET
👍 Sara Beery
Jon Van Oast (jon@wildme.org)
2019-11-12 17:48:52

*Thread Reply:* (e.g. search hydrophones perhaps?)

🙂 Silvia Zuffi
Dan Morris (agentmorris@gmail.com)
2019-11-19 19:25:12

Hey everyone... this crowd is likely interested in a machine learning competition we just launched around Snapshot Serengeti data. Our hypothesis is that because we've done a bunch of infrastructure work to make sure competitor code runs on unseen test data, we will get models that are more generalizable to new data. We look forward to your submissions! https://aiforearth.drivendata.org/

aiforearth.drivendata.org
❤️ Sara Beery, Elizabeth Bondi, Elijah Cole (Deactivated), Caleb Robinson, Amrita Gupta, Ethan White
👍 Jon Van Oast, Siyu Yang, Oisin Mac Aodha, gvanhorn, Caleb Robinson, Amrita Gupta, Talia Speaker
Ben Weinstein (benweinstein2010@gmail.com)
2019-11-19 19:58:58

*Thread Reply:* @Dan Morris, how does this relate to what @Sara Beery ran last year? Similar topic, different dataset? It'd be interesting to cross-train from there.

Dan Morris (agentmorris@gmail.com)
2019-11-19 20:00:39

*Thread Reply:* Related in the sense that it's all "ML for camera traps", different in (a) task (Sara's iWildCam competition focused on domain transfer, which is hard and necessarily requires some coarsening of the categories), (b) data set, and (c) competition structure (this one has competitors submitting code rather than results).

✔️ Jon Van Oast
👍 Ben Weinstein
Sara Beery (sbeery@caltech.edu)
2019-12-09 14:11:21

Hey everyone! Just a friendly reminder that the AI for Animal Re-ID workshop deadline is December 15th! Let us know if you have any questions, we want this workshop to be inclusive, useful, and hopefully kick-start some cool new collaborations :) https://sites.google.com/corp/view/wacv2020animalreid/

🦌 Oisin Mac Aodha
🐆 Stefan Schneider
🎉 Jon Van Oast
Nathaniel Rindlaub (nathaniel.rindlaub@tnc.org)
2019-12-09 16:58:05

Thanks @Sara Beery! Where will the workshop be held?

Sara Beery (sbeery@caltech.edu)
2019-12-09 16:59:31

*Thread Reply:* In Aspen, Colorado as part of WACV2020: https://wacv20.wacv.net/

👍 Nathaniel Rindlaub
Sara Beery (sbeery@caltech.edu)
2019-12-16 12:21:45

*Thread Reply:* See below, we're opening it up to remote submissions :)

Stefan Schneider (sschne01@uoguelph.ca)
2019-12-16 11:57:32

Hi everyone!

Due to interest from groups who expressed that they may not be able to attend in person, we have decided to open up the Animal Re-Identification Workshop at WACV2020 to remote submissions. If accepted, these submissions will be invited to send in a short video which will be shown on the day of the workshop. As a result, we are extending the deadline to 11:59pm PST, December 22nd, 2019. If you have any questions, please reach out to the primary organizers, Stefan and Sara, listed on the website.

The workshop website is at https://sites.google.com/view/wacv2020animalreid/home

🐒 Sara Beery, Oisin Mac Aodha
💯 Sara Beery, Siyu Yang
👍 Jon Van Oast, Talia Speaker
Sara Beery (sbeery@caltech.edu)
2020-01-18 04:03:23

There is a new tenure track professor position opening in machine learning in the Center for the Advanced Study of Collective Behaviour at the University of Konstanz! https://academicpositions.de/ad/university-of-konstanz/2020/tenure-track-professorship-of-machine-learning/139409

academicpositions.de
Ben Koger (benkoger@gmail.com)
2020-01-18 16:43:40

*Thread Reply:* Thanks for sharing! If anyone has any questions about the position, department, or Konstanz in general feel free reach out to me or @Hemal Naik

🙌 Sara Beery
👍 Hemal Naik, Jon Van Oast
Ben Weinstein (benweinstein2010@gmail.com)
2020-01-24 23:08:39

Hi all, posting a pre-hiring announcement here instead of #jobs because there are more people. I just got news that the ETH Swiss Data Science Center will be funding our proposal on biological object detection. We will be hiring a postdoc and a technician, as well as soliciting datasets for testing. The project covers a range of challenges, including designing deep learning pipelines for animal detection in time-lapse video, integrating fine-grained classification and domain knowledge for species prediction, and online training for a web platform for fine-tuning pretrained models in the cloud. Thanks to @Sara Beery , @Siyu Yang, @Tanya Berger-Wolf for contributing feedback on the project idea at KDD, and @Benjamin Kellenberger for inspiring the latter part with his very cool web app demo. Look for a job ad in the next month. Happy to answer questions, or brainstorm how we can best serve this growing community.

🔥 Jon Van Oast, Sara Beery, Oisin Mac Aodha, gvanhorn, Ben Koger, Lily Xu, Frederic, Siyu Yang
Sara Beery (sbeery@caltech.edu)
2020-01-24 23:18:51

*Thread Reply:* This is awesome! Congrats on the funding 🙂

kennedy Muriithi (mmkennedy93@gmail.com)
2020-01-30 03:59:22

Hi everyone, I'm Kennedy. I'm an IT engineer at the conservation tech lab at Ol Pejeta Conservancy.

The tech lab's goal is to research, test, support, and develop new technology-based solutions to conservation challenges. https://www.olpejetaconservancy.org/press-release-technology-lab-focused-on-wildlife-protection-opens-on-ol-pejeta-conservancy/

🦏 Sara Beery, Elizabeth Bondi, Ben Weinstein, Lily Xu, Riccardo Pressiani, Manish Rai
👍 Ben Koger, gvanhorn, Riccardo Pressiani, Siyu Yang
🎉 Jon Van Oast
Ben Weinstein (benweinstein2010@gmail.com)
2020-02-02 16:11:39

Hi everyone, we are hosting a web series on machine learning for remote sensing applications. All are welcome. Here is the original message from Hannah Kerner ```We're excited to move forward with our discussion group focused on machine learning for remote sensing applications. In these meetings we'll cover recent papers, tools, or other topics related to machine learning for remote sensing, and topics can evolve based on the group's interests. We are meeting every other Friday at 11am PST / 2pm EST / 8pm CET on WebEx (link below). This is a diverse yet aligned group and we think providing a regularly scheduled, semi-structured space for discussion will lead to both interesting cross-pollination and future collaborations.

Our first discussion is Friday February 7th. Tony Chang will lead a discussion on his paper "Chimera: A deep-learning approach for fusing multi-sensor data for forest classification and structural estimation."

Check out the schedule here and please let us know if you'd like to present a paper or topic for future discussions. Feel free to also share the link to join the group.

Webex link: https://umd.webex.com/meet/hkerner```

💯 Jon Van Oast, Sara Beery, Ben Koger, Hannah Kerner, Siyu Yang, Amrita Gupta, Manish Rai, Laurel Hopkins
Sara Beery (sbeery@caltech.edu)
2020-02-12 03:58:12

*Thread Reply:* Do you have the link to the schedule?

Sara Beery (sbeery@caltech.edu)
2020-02-12 03:58:35

*Thread Reply:* Or do you need to sign up for the webex to see it?

Ben Weinstein (benweinstein2010@gmail.com)
2020-02-12 13:49:03
Hannah Kerner (hkerner@umd.edu)
2020-02-17 09:48:47

*Thread Reply:* updated now!

Ben Weinstein (benweinstein2010@gmail.com)
2020-02-02 16:13:14

Here is a link to Tony’s excellent paper. https://www.mdpi.com/2072-4292/11/7/768

MDPI
🌳 Sara Beery, Frederic, Manish Rai
Ben Weinstein (benweinstein2010@gmail.com)
2020-02-18 13:24:26

Posting again for the meetup on machine learning for environmental remote sensing (thanks to @Hannah Kerner): this Friday at 11am PT will be Dr. Sherrie Wang on weak supervision in remote sensing. Check out her great paper here: https://www.mdpi.com/2072-4292/12/2/207 and join at the Webex link: https://umd.webex.com/meet/hkerner

MDPI
Cisco Webex Site
💯 Hannah Kerner, Sara Beery, Lily Xu, Ben Koger
Amrita Gupta (agupta375@gatech.edu)
2020-02-20 12:12:44

*Thread Reply:* Is there a website or mailing list we can join/share to spread the word about these seminars?

Ben Weinstein (benweinstein2010@gmail.com)
2020-02-20 12:12:56

*Thread Reply:* ya

Ben Weinstein (benweinstein2010@gmail.com)
2020-02-20 12:13:40

*Thread Reply:* ML for Remote Sensing Reading & Discussion Group (@Hannah Kerner) I think needs to add people individually?

Ben Weinstein (benweinstein2010@gmail.com)
2020-02-20 12:13:47

*Thread Reply:* its a google group.

Amrita Gupta (agupta375@gatech.edu)
2020-02-20 12:15:43

*Thread Reply:* Thank you, I've found it!

Ben Weinstein (benweinstein2010@gmail.com)
2020-02-20 12:16:18

*Thread Reply:* were you able to join? I wasn’t sure what the settings were.

Amrita Gupta (agupta375@gatech.edu)
2020-02-20 12:18:40

*Thread Reply:* I was able to submit a form to apply. I mainly want to share this with some folks at our lab working on ML and remote sensing who aren't part of this Slack group!

👍 Ben Weinstein
Hannah Kerner (hkerner@umd.edu)
2020-02-20 16:48:26

*Thread Reply:* You should be able to request to join the google group: https://groups.google.com/d/forum/ml4rs/join then I or another organizer will approve it

Hannah Kerner (hkerner@umd.edu)
2020-02-20 16:48:54

*Thread Reply:* let me know if you have any issues!

Amrita Gupta (agupta375@gatech.edu)
2020-02-21 00:03:59

*Thread Reply:* Thanks!

Sara Beery (sbeery@caltech.edu)
2020-02-21 16:37:59

*Thread Reply:* @Amrita Gupta you're also totally welcome to invite people to this slack if you think they're interested!!

Sara Beery (sbeery@caltech.edu)
2020-02-19 12:55:31

A few upcoming workshops to have on your radar!

If you're going to be at WACV, we're running a workshop on Animal Re-Identification on March 1st: https://sites.google.com/corp/view/wacv2020animalreid/home

If you're interested in remote sensing, there's an upcoming workshop at CVPR: https://www.grss-ieee.org/earthvision2020/

If you're interested in species identification, or other fine-grained challenges, we're holding the 7th Fine-Grained Visual Categorization Workshop at CVPR (there will be a few associated biodiversity-focused kaggle challenges that will be announced in the coming weeks): https://sites.google.com/corp/view/fgvc7

🎉 Jon Van Oast, Oisin Mac Aodha, gvanhorn, Lily Xu, Elizabeth Bondi
Ben Weinstein (benweinstein2010@gmail.com)
2020-02-20 12:01:24

General question (but I hope @Ben Koger will have an opinion). I am starting a new project on wading bird detection in the Everglades. Imagery is collected by drone (small white birds in bottom left). I'm inheriting this work, and the current workflow is to perform the orthomosaic and georectification and then cut the final tile into pieces for prediction. This is computationally huge (tiles are > 300 GB), creates minor distortions in the image during image matching, and seems wasteful to me, since we already have the raw images. I'd rather make the predictions on the raw images, and then worry about reducing overcounting by matching images that contain the same bird. What do other people do? My sense is that georectification is useful, but an accurate count is the top priority.

Ben Koger (benkoger@gmail.com)
2020-02-21 11:39:47

*Thread Reply:* I would definitely recommend using the raw images. Beyond just distortions, I would imagine that, especially at the edges of frames, some birds are blurred out or disappear entirely in the tile-making process. Especially if you can assume that the ground is approximately flat, which for the Everglades I guess is mostly true, and the drone is flying at a fixed height, I don't think the georectification will give you much extra information. You should be able to correct for most of the double counting by taking the fraction of image overlap into account, without needing the orthomosaic. Of course, neither approach accounts for moving birds. I'd be curious to hear more about the project!
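
[Editor's note] The overlap-based correction described above can be sketched roughly as follows. This is an illustrative sketch only, assuming nadir imagery, flat ground, a fixed flight height, and roughly uniform bird density along the flight strip; the function names and example numbers are made up:

```python
def ground_footprint_m(altitude_m, sensor_width_mm, focal_length_mm):
    """Width of ground (meters) covered by one nadir image,
    from the standard pinhole ground-footprint relation."""
    return altitude_m * sensor_width_mm / focal_length_mm

def overlap_fraction(footprint_m, spacing_m):
    """Fraction of two consecutive frames that images the same ground,
    given the distance flown between exposures."""
    return max(0.0, 1.0 - spacing_m / footprint_m)

def corrected_count(per_image_counts, overlap):
    """Naive de-duplication: with uniform density along a strip, summed
    per-image counts overestimate the true count by 1 / (1 - overlap)."""
    return sum(per_image_counts) * (1.0 - overlap)
```

In practice you would match detections in the overlapping regions directly (as suggested above) rather than rely on a density assumption; this only shows why counting on raw images doesn't require the orthomosaic.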

Ben Weinstein (benweinstein2010@gmail.com)
2020-02-21 12:46:46

*Thread Reply:* i'm thinking along the same lines. Eventually I think our partners will want the spatial locations for other uses. Have you ever seen post-hoc georectification? I don't have a ton of experience here, but my hope is that we can get the camera information and warping from Agisoft PhotoScan and apply those transformations to the annotations as well as the images. Sound crazy?

Dan Morris (agentmorris@gmail.com)
2020-02-22 10:03:54

New data set @ lila.science, re: wildlife and human detection in drone images:

http://lila.science/datasets/conservationdrones

LILA BC
🐘 Sara Beery, Elizabeth Bondi, Ben Koger, Lily Xu, Siyu Yang
🎉 Jon Van Oast
Elizabeth Bondi (ebondi@g.harvard.edu)
2020-02-22 10:19:10

*Thread Reply:* Thanks, @Dan Morris! Everyone, please let me know if you have any questions!

❤️ Lily Xu
Sara Beery (sbeery@caltech.edu)
2020-02-25 13:39:26

A postdoc and a PhD position studying bird biodiversity with remote sensing have opened up at UW-Madison: https://uwmadison.co1.qualtrics.com/jfe/form/SV_aXGVhdC5eqRLSqF

"We are offering one postdoc position and one PhD position focused on remote sensing and bird biodiversity, as part of ongoing collaborations with the US Forest Service and the US Geological Survey.   The main goal of our project is to create bird biodiversity maps that are suitable for land management decisions and conservation actions.  In order to do so, we are developing new remote sensing indices designed for species distribution modeling, and making predictive maps of bird biodiversity for the conterminous U.S."

uwmadison.co1.qualtrics.com
Sara Beery (sbeery@caltech.edu)
2020-02-25 13:42:27

*Thread Reply:*

Ben Weinstein (benweinstein2010@gmail.com)
2020-02-25 14:41:51

*Thread Reply:* adding that i’ve worked with Volker and he and Anna are incredibly kind and talented scientists. highly recommended.

❤️ Sara Beery
Sara Beery (sbeery@caltech.edu)
2020-02-28 17:54:46

CALL FOR PAPERS: 7th Annual Workshop on Fine-Grained Visual Categorization at CVPR 2020

OVERVIEW FGVC7: The Seventh Workshop on Fine-Grained Visual Categorization June 19th in conjunction with CVPR 2020, June, Seattle, USA. Website: https://sites.google.com/view/fgvc7 Twitter: @fgvcworkshop

The purpose of this workshop is to bring together researchers to explore visual recognition across the continuum between basic level categorization (object recognition) and identification of individuals (face recognition, biometrics) within a category population. Participants are encouraged to submit short papers relevant to the workshop and to take part in a set of competitions organized in conjunction with  the workshop - details below. 

WORKSHOP DESCRIPTION Fine-grained categorization (called 'subordinate categorization' in the psychology literature) lies in the continuum between basic-level categorization (object recognition) and the identification of individuals (e.g., face recognition, biometrics). The visual distinctions between similar categories are often quite subtle and therefore difficult to address with today's general-purpose object recognition machinery. This is especially true for domains where data is not readily available on the web (e.g., medical images, or depth data), or domains for which training data is limited. It is likely that a radical re-thinking of the techniques for representation learning, architecture design, human-in-the-loop learning, few-shot learning, and self-supervised learning currently used for visual recognition will be needed to improve fine-grained categorization. It is our hope that the invited talks, including researchers from scientific application domains, will shed light on human expertise and human performance in subordinate categorization and on motivating research applications. More information about previous FGVC workshops and competitions can be found at http://www.fgvc.org/.

PAPER SUBMISSION We invite submission of 3-page (excluding references) extended abstracts (using the CVPR 2020 format) describing work in the domains suggested above or in closely related areas. Accepted submissions will be presented as posters at the workshop. Reviewing of abstract submissions will be double-blind. The purpose of this workshop is not to serve as a venue for publication so much as to gather together those in the community working on or interested in FGVC. Submission of work that has been previously published, including papers accepted to the main CVPR 2020 conference, is allowed.

For more details see https://sites.google.com/view/fgvc7/submission

Topics of interest include the following:

Fine-grained categorization
* Novel datasets and data collection strategies for fine-grained categorization
* Appropriate error metrics for fine-grained categorization
* Low/few-shot learning
* Self-supervised learning
* Transfer learning from known to novel subcategories
* Attribute- and part-based approaches
* Taxonomic predictions

Human-in-the-loop
* Fine-grained categorization with humans in the loop
* Embedding human experts' knowledge into computational models
* Machine teaching
* Interpretable fine-grained models

Multimodal learning
* Using audio and video data
* Using geographical priors
* Using shape/3D information

Fine-grained applications
* Product recognition
* Animal biometrics and camera traps
* Museum collections (e.g. biological, art, ...)

PAPER SUBMISSION DATES
* Submission deadline: 27th March 2020
* Decisions: 27th April 2020
* Camera-ready deadline: 7th May 2020
* Submission site: the CMT URL will be available soon at https://sites.google.com/view/fgvc7/submission

COMPETITIONS We will be holding six fine-grained computer vision challenges with tasks ranging from classification of attributes in art images through to classifying diseases in plants. The competitions are hosted on Kaggle.

For more details please visit: FGVC https://sites.google.com/view/fgvc7

COMPETITION DATES
* Competitions start: March 2020
* Competitions end: May 2020

🐦 Oisin Mac Aodha, Subhransu Maji, Ben Weinstein
🦌 Oisin Mac Aodha, Lily Xu
🌿 Oisin Mac Aodha
✔️ Jon Van Oast, Subhransu Maji
😃 Amrita Gupta
Sara Beery (sbeery@caltech.edu)
2020-03-02 17:55:23

Cool toolkit for adapting detectors for new marine applications https://www.viametoolkit.org/

👍 Holger Klinck, gvanhorn
🐋 Stefan Schneider, Siyu Yang
Ben Weinstein (benweinstein2010@gmail.com)
2020-03-03 12:43:49

We are organizing a tree crown prediction competition, if anyone wants to join or knows interested parties.

Ben Weinstein (benweinstein2010@gmail.com)
2020-03-03 12:43:52

```The IDTReeS research group invites participants to a data science competition to identify trees in remote sensing data. Read about our team and sign up for the competition: https://idtrees.org/competition/

The ecological community is confronted with questions that span large geographic extents. Fortunately, we have remote sensing data to help address these questions. What's the issue? Our tools to effectively turn remote sensing into ecological information are still limited. There are lots of advancements to be made, but we can't do it alone! We need people working in ecology, remote sensing, data science, computer science, and image processing to advance our methods.

A data science competition allows people from all disciplines to work on a standardized dataset so we can truly compare methods to find the approaches that push this work to the next level. Participants in this competition will use data from the National Ecological Observatory Network to work on 2 key tasks:
1. Identify individual trees in remote sensing images
2. Classify trees into species
Teams (or individuals) can participate in either or both tasks. Task 1 requires working directly with remote sensing data (RGB, LIDAR, and hyperspectral). Task 2 can either leverage this raw remote sensing data or use simplified tabular data provided by the organizers.

The competition runs from March to May. The output of this competition will be a synthesis paper covering the competition, data, and comparison of the different methods. Each team will have the opportunity to write up and publish an associated short paper on the methods they used and the results they produced.

Check out this blog to learn more: https://jabberwocky.weecology.org/2020/02/03/data-science-competition-converting-remote-sensing-into-trees/ Read about our team and sign up for the competition: https://idtrees.org/competition/ ```

🌳 Sara Beery, Oisin Mac Aodha, Siyu Yang, Lily Xu, Elizabeth Bondi
🎉 Jon Van Oast
Zac Winzurk (zwinzurk@asu.edu)
2020-03-04 21:21:47

Hi everyone! My name is Zac and I'm an undergraduate student at Arizona State University. I'm researching self-supervised methods in computer vision with the goal of reducing data annotation costs for animal detection, species recognition, and re-ID on camera trap data. Excited to join the channel and learn more about what other people are working on in this domain!

👍 Jon Van Oast, Siyu Yang, Sara Beery, Oisin Mac Aodha, gvanhorn
🐇 Siyu Yang, Sara Beery
👋 Jon Van Oast
Ben Weinstein (benweinstein2010@gmail.com)
2020-03-04 21:26:09

*Thread Reply:* Hey Zac, that’s where we are too. https://www.mdpi.com/2072-4292/11/11/1309, also see all the great work by others on online learning and active learning with human in the loop

Ben Weinstein (benweinstein2010@gmail.com)
2020-03-04 21:27:44

*Thread Reply:* https://link.springer.com/article/10.1007/s13218-020-00631-4

KI - Künstliche Intelligenz
Ben Weinstein (benweinstein2010@gmail.com)
2020-03-04 21:28:40
Zac Winzurk (zwinzurk@asu.edu)
2020-03-04 21:46:57

*Thread Reply:* Ok, awesome! Everyone else in the lab I work in works on medical images, so I'm familiar with the topics of active learning, semi-supervised learning, and self-supervised learning in that domain, but I'm still learning what has been done so far with wildlife image tasks. Thanks for all the links!

Sara Beery (sbeery@caltech.edu)
2020-03-09 17:24:53

The third annual iWildCam camera trap competition just launched! https://www.kaggle.com/c/iwildcam-2020-fgvc7

This year we are focusing on multimodality: how to best combine multiple available data streams to improve conservation-focused ML results. For every camera trap in both train and test we provide paired multispectral remote sensing imagery from the same location that competitors can use to try to improve generalization to new cameras. Take a crack at it, and let me or @Elijah Cole (Deactivated) know if you have any questions!

kaggle.com
🐯 Elijah Cole (Deactivated), Oisin Mac Aodha, Elizabeth Bondi, Zac Winzurk
📷 Jon Van Oast
🎉 gvanhorn
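
[Editor's note] One simple way to use paired remote sensing data like this is late fusion: blend the image classifier's per-class logits with a per-location species prior computed from the remote sensing covariates. A minimal sketch; the weighting scheme and all names here are assumptions for illustration, not the competition's reference method:

```python
import numpy as np

def fuse_predictions(image_logits, rs_features, w_rs, alpha=0.5):
    """Blend per-class image logits with a linear species prior derived
    from remote sensing covariates of the camera location, then softmax."""
    prior_logits = rs_features @ w_rs                  # (num_classes,)
    fused = alpha * image_logits + (1 - alpha) * prior_logits
    e = np.exp(fused - fused.max())                    # numerically stable softmax
    return e / e.sum()
```

The weight matrix `w_rs` would itself be learned, e.g. from which species are observed at training locations with similar covariates.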
Sara Beery (sbeery@caltech.edu)
2020-03-10 14:18:00

If anyone hasn't signed up yet, you can still register for the WILDLABS virtual meetup on Acoustic Monitoring this afternoon! https://www.wildlabs.net/resources/community-announcements/wildlabs-virtual-meetup-invitation-acoustic-monitoring

WILDLABS.NET
🎉 Jon Van Oast
👍 Talia Speaker
Ben Weinstein (benweinstein2010@gmail.com)
2020-03-10 17:31:07

From the talk today, source of open source recording data for australia https://acousticobservatory.org/

acousticobservatory.org
👍 Jon Van Oast, Talia Speaker, Sara Beery, Ruth Taylor
Ben Weinstein (benweinstein2010@gmail.com)
2020-03-16 16:20:07

Hi all, just forwarding @Hannah Kerner's message for our biweekly Machine Learning for Remote Sensing meetup. Ironically, I'm presenting this week. Hi everyone,

I hope you are all staying healthy and having productive isolations. We’ll have our next discussion of machine learning for remote sensing applications this Friday at 11am PST / 2pm EST / 8pm CET.

This week Ben Weinstein will be discussing his work on tree crown detection using weakly supervised deep learning methods. We can look forward to a fun discussion around multi-sensor data, geographic generalization, and semi-supervision. Here is his latest paper on this topic, as well as a python package and benchmark dataset.

As always, you can see the schedule of future talks here, and let us know if you would like to present (your ongoing work, a recent paper, etc) at a future meeting! Feel free to share the link to join the group with others. 

Webex link: https://umd.webex.com/meet/hkerner

Cheers, Hannah and Patrick

💯 Sara Beery
🎉 Hannah Kerner, Jon Van Oast
👍 Elijah Cole (Deactivated), Riccardo de Lutio, Siyu Yang
Sara Beery (sbeery@caltech.edu)
2020-03-23 13:47:53

The Digital Data in Biodiversity conference is going fully digital this year, and registration fees are now optional:

https://www.idigbio.org/content/digital-data-2020-harnessing-data-revolution-and-amplifying-collections-biodiversity

iDigBio
👍 Jon Van Oast, Ștefan Istrate, Riccardo de Lutio, Ben Koger
🗺️ Stefan Schneider
Sara Beery (sbeery@caltech.edu)
2020-03-26 11:02:01

Shah Selbe is throwing a conservation tech happy hour on zoom this evening, 4pm PST 🍻

Join Zoom Meeting https://zoom.us/j/393211446

Meeting ID: 393 211 446

One tap mobile

Dial by your location:
+1 669 900 6833 US (San Jose)
+1 346 248 7799 US (Houston)
+1 312 626 6799 US (Chicago)
+1 929 205 6099 US (New York)
+1 253 215 8782 US
+1 301 715 8592 US
Meeting ID: 393 211 446
Find your local number: https://zoom.us/u/ad3PBWZb4i

🍻 Elijah Cole (Deactivated), Shah
🎉 Jon Van Oast, Shah
Sara Beery (sbeery@caltech.edu)
2020-03-26 17:26:43

Also, WILDLABS has brought back its monthly digest on conservation tech. See the March write-up here: https://mailchi.mp/wildlabs/digest-march-2020?e=7b9d646075

mailchi.mp
🎉 Jon Van Oast
👍 Talia Speaker
Sara Beery (sbeery@caltech.edu)
2020-03-30 18:35:48

If anyone wants to join, I'm giving the CompSust Open Graduate Seminar this Friday, 1:30–2:30pm ET. Details to follow in a comment to avoid spamming the feed 🙂

🐦 Holger Klinck, Lily Xu, Elizabeth Bondi
👍 Nathan Hahn, Siyu Yang
🎉 Jon Van Oast, Talia Speaker
Sara Beery (sbeery@caltech.edu)
2020-03-30 18:36:48

*Thread Reply:* Please join us for the fifth CompSust Open Graduate Seminar for spring 2020! This talk will take place on Friday, April 03, 2020, from 1:30–2:30 pm Eastern Time. 

Sara Beery from Caltech will be presenting on "Improving Computer Vision for Camera Traps: Leveraging Practitioner Insight to Build Solutions for Real-World Challenges". Details about the talk and how to join are provided below. 

Please join the meeting via the link: tinyurl.com/cogs2020 Join by telephone US (New York) Meeting ID: 138 884 216 One tap mobile , 138884216# US (New York)   International numbers available: https://harvard.zoom.us/u/ab1q1IVGx4

Join by SIP conference room system Meeting ID: 138 884 216 138884216@zoomcrc.com

Title: Improving Computer Vision for Camera Traps: Leveraging Practitioner Insight to Build Solutions for Real-World Challenges

Abstract: Camera traps are widely used to monitor animal populations and behavior, and generate vast amounts of data. There is a demonstrated need for machine learning models that can automate the process of detecting and classifying animals in camera trap images. Previous work has shown exciting results on automated species classification in camera trap data, but further analysis has shown that these results do not generalize to new cameras or new geographical regions, and struggle to categorize rare species or poor quality images. Consequently, very few organizations have successfully deployed machine learning tools for camera trap image review. I will discuss my recent work tackling these real-world challenges with improved model architectures, data de-siloing, and data augmentation methods, and building accessible tools for biologists with Microsoft AI for Earth and Wildlife Insights.

Sara Beery (sbeery@caltech.edu)
2020-04-03 13:20:04

*Thread Reply:* My COGS talk starts in 10 minutes! I'm excited :)

😊 Lily Xu
🎉 Jon Van Oast
Jon Van Oast (jon@wildme.org)
2020-04-03 13:49:05

*Thread Reply:* oh my... well, yes. a reminder perhaps we all should use passwords! 😮

Jon Van Oast (jon@wildme.org)
2020-04-03 13:50:17

*Thread Reply:* ( i had my audio on and was coding, kinda zoned out -- then came out of my code-fog to realize there was commotion.... haha.... sigh. )

Sara Beery (sbeery@caltech.edu)
2020-04-03 14:27:05

*Thread Reply:* Yeah, I get the feeling the COGS organizers will definitely be adding a password after that 😕

Jon Van Oast (jon@wildme.org)
2020-04-03 14:30:32

*Thread Reply:* for sure. i have had 3 such meetings bombed in the last week+ ..... i guess all the middle schoolers left out of school with nothing to do. 😞 ha.... at least it is an excellent working example of how security by obscurity doesn't work well. 😄 excellent talk though. i have to try so hard not to get distracted by camera traps (until someone actually gives me official work on the subject, i mean).

Sara Beery (sbeery@caltech.edu)
2020-04-03 14:32:22

*Thread Reply:* Do it!!!! Camera traps for everyone!

💯 Jon Van Oast
Jon Van Oast (jon@wildme.org)
2020-04-03 14:35:23

*Thread Reply:* no tempting! lol

Jon Van Oast (jon@wildme.org)
2020-04-03 14:35:25

*Thread Reply:* not until i get budget/instructions.

Jon Van Oast (jon@wildme.org)
2020-04-03 14:35:34

*Thread Reply:* we do have these real world giraffe camera trap images... and i am the giraffe tech lead.... mwahaha

Jon Van Oast (jon@wildme.org)
2020-04-03 14:35:49

*Thread Reply:* <resists>

Sara Beery (sbeery@caltech.edu)
2020-04-03 14:35:55

*Thread Reply:* 🔥

Jon Van Oast (jon@wildme.org)
2020-04-03 14:38:25

*Thread Reply:* heck, i even want to start out with training a simple detector of camera-trap-source vs not-camera-trap-source. cuz who even knows what photos i have over there in giraffeland. #distracted

Sara Beery (sbeery@caltech.edu)
2020-04-03 14:39:28

*Thread Reply:* I think it would probably be pretty easy to train, most camera trap images have those weird timestamps/logos.

Jon Van Oast (jon@wildme.org)
2020-04-03 14:40:34

*Thread Reply:* exactly. and that way we take the burden off the user from caring about that distinction. i agree/hope also that it would be a fairly easy task. maybe one of my "its a weekend in lockdown" unofficial dabblings....

👍 Sara Beery
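
[Editor's note] The camera-trap-vs-not idea above could even start from a heuristic before any training: many camera traps stamp a solid info strip (timestamp/logo bar) along the top or bottom edge, and such bands have far lower pixel variance than natural scenery. A toy sketch; the band fraction and variance threshold are invented for illustration and would need tuning on real data:

```python
import numpy as np

def has_info_bar(img, band_frac=0.06, var_thresh=200.0):
    """Return True if the top or bottom band of the image looks like a
    flat camera-trap info strip (very low grayscale variance)."""
    gray = img.mean(axis=2) if img.ndim == 3 else img
    band = max(1, int(gray.shape[0] * band_frac))
    return min(gray[:band].var(), gray[-band:].var()) < var_thresh
```

A detector trained on real labels would do better, but cues like this are exactly why the task is expected to be fairly easy.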
Shah (shah@conservify.org)
2020-04-03 00:22:15

I got this in a newsletter from AI LA. Not sure if it's going to be good, but I wanted to share in case anyone is interested: https://www.eventbrite.com/e/computer-vision-on-the-edge-webinar-tickets-97793104809

Eventbrite
👍 Jon Van Oast, Sara Beery, Sam Kelly
Sara Beery (sbeery@caltech.edu)
2020-04-10 12:50:55

Hey everyone! I'm trying to get a better understanding of species distribution modeling and population estimation; I want to understand what the outputs of my ML models are used for! I'm trying to do a lit review, but since it's not my main area I'm worried I might be missing things. Can anyone point me to seminal papers in these areas, or good review papers?

👍 Siyu Yang
Jon Van Oast (jon@wildme.org)
2020-04-10 13:36:48

*Thread Reply:* lemme try to rope @Jason Holmberg (Wild Me) and @Jason Parham in on this one!

Sara Beery (sbeery@caltech.edu)
2020-04-10 13:39:14

*Thread Reply:* Thanks @Jon Van Oast!

👍 Jon Van Oast
Ben Weinstein (benweinstein2010@gmail.com)
2020-04-10 17:04:49

*Thread Reply:* hi!! I know way too much about thiiiisss. This was most of my PhD. Here are the canonical citations. What would you like to know? https://www.annualreviews.org/doi/full/10.1146/annurev.ecolsys.110308.120159 https://onlinelibrary.wiley.com/doi/full/10.1111/j.2006.0906-7590.04596.x https://onlinelibrary.wiley.com/doi/full/10.1046/j.1466-822X.2003.00042.x

🙌 Sara Beery
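
[Editor's note] For readers new to the area: at its simplest, a correlative species distribution model is just a regression of presence/absence on environmental covariates, which you then evaluate over a map grid. A toy sketch on synthetic data (everything here is illustrative, not drawn from the papers above):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Logistic regression by gradient descent: presence ~ covariates."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted occurrence probability
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Synthetic example: presence driven by a single standardized covariate.
rng = np.random.default_rng(42)
temp = rng.normal(0.0, 1.0, 500)
presence = (temp + rng.normal(0.0, 0.5, 500) > 0).astype(float)
w, b = fit_logistic(temp[:, None], presence)
```

Real SDMs layer on presence-only corrections, spatial structure, and detection models, which is what the cited reviews cover.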
Sara Beery (sbeery@caltech.edu)
2020-04-10 17:06:09

*Thread Reply:* @Ben Weinstein how about I read these and then we meet and discuss sometime next week?

Ben Weinstein (benweinstein2010@gmail.com)
2020-04-10 17:06:45

*Thread Reply:* sure. Has to be between 1:30-3ish on a weekday to make sure the kid is asleep.

Sara Beery (sbeery@caltech.edu)
2020-04-10 17:07:45

*Thread Reply:* Works for me! How about 1:30pm on Wednesday? I'll send a meeting invite 🙂

👍 Ben Weinstein
Ben Weinstein (benweinstein2010@gmail.com)
2020-04-10 17:08:54

*Thread Reply:* one more for background https://onlinelibrary.wiley.com/doi/full/10.1111/j.1461-0248.2005.00792.x

👍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2020-04-10 17:10:45

*Thread Reply:* Invited @Elijah Cole (Deactivated) to join the party 🙂

Ben Weinstein (benweinstein2010@gmail.com)
2020-04-10 17:23:14

*Thread Reply:* Also worth checking out, may help automate anything you are planning to do https://wallaceecomod.github.io/

wallaceecomod.github.io
🎉 Jon Van Oast, Sara Beery
Sara Beery (sbeery@caltech.edu)
2020-04-10 17:23:51

*Thread Reply:* @Ben Weinstein you are magnificent

💯 Jon Van Oast
Ben Weinstein (benweinstein2010@gmail.com)
2020-04-10 17:24:31

*Thread Reply:* lol, you literally couldn’t have picked something easier for me to do.

😂 Sara Beery
😄 Jon Van Oast
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-04-11 04:20:25

*Thread Reply:* I am highly interested in that too. So thank you very much for the literature tips!

❤️ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-04-15 16:40:06

*Thread Reply:* I’m here.

👍 Benjamin Kellenberger
Ben Weinstein (benweinstein2010@gmail.com)
2020-04-15 17:29:45
Ben Weinstein (benweinstein2010@gmail.com)
2020-04-15 17:30:54

*Thread Reply:* https://www.pnas.org/content/114/49/12976.short

PNAS
Tony Chang (tony@csp-inc.org)
2020-04-10 13:26:15

@Sara Beery this is a good overview https://onlinelibrary.wiley.com/doi/10.1111/j.1461-0248.2005.00792.x

❤️ Sara Beery
Sara Beery (sbeery@caltech.edu)
2020-04-10 13:29:49

*Thread Reply:* Thanks! These look awesome 🙂

Sara Beery (sbeery@caltech.edu)
2020-04-21 19:40:01

Conservation X Labs is hosting a free "ideathon" this Saturday to workshop conservation technology ideas: https://conservationxlabs.com/ideathon

Conservation X Labs
🎉 Jon Van Oast, Ben Koger, Chad Gallinat, Ruth Taylor
Sara Beery (sbeery@caltech.edu)
2020-04-22 12:59:02

Any other cool Earth Day happenings that people should know about? Let's start a thread!

🎉 Jon Van Oast, Siyu Yang, Frederic
Sara Beery (sbeery@caltech.edu)
2020-04-22 13:00:20

*Thread Reply:* FieldKit is doing an earth day giveaway: https://www.fieldkit.org/blog/fieldkit50-earth-day-giveaway/

FieldKit
Sara Beery (sbeery@caltech.edu)
2020-04-22 13:02:38

*Thread Reply:* Meredith Palmer gave a workshop on how people can contribute as citizen scientists during quarantine: https://scistarter.org/go-on-safari-with-citizen-science

SciStarter
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2020-04-22 14:12:19

*Thread Reply:* Part of NASA's #EarthDayAtHome activities is a cute game that helps the Ames supercomputer learn coral classifications

Suzanne Stathatos (suzanne.stathatos@gmail.com)
2020-04-22 14:12:20

*Thread Reply:* http://nemonet.info/

👏 Sara Beery
Siyu Yang (yasiyu@microsoft.com)
2020-04-22 14:22:57

*Thread Reply:* We’re doing a bioblitz on iNaturalist for offices everywhere, to which I’ve made zero contributions so far 🤓 https://www.inaturalist.org/projects/microsoft-bioblitz

iNaturalist
🌿 Sara Beery
Siyu Yang (yasiyu@microsoft.com)
2020-04-22 15:34:39

*Thread Reply:* Also, I just started using Ecosia, a search engine that uses its ad profits to plant trees! https://www.ecosia.org/ Search results come from Bing

ecosia.org
🌳 Sara Beery, Suzanne Stathatos, hieule
👍 Sara Beery
Jon Van Oast (jon@wildme.org)
2020-04-23 13:37:15

fyi, via aaai newsletter, new AI website: aihub.org

Sara Beery (sbeery@caltech.edu)
2020-04-23 13:38:14

*Thread Reply:* Which article?

Jon Van Oast (jon@wildme.org)
2020-04-23 13:48:25

*Thread Reply:* sorry - is a new site ... will update original.

Sara Beery (sbeery@caltech.edu)
2020-04-23 13:50:01

*Thread Reply:* Ah, cool!

Sara Beery (sbeery@caltech.edu)
2020-04-23 13:50:09

*Thread Reply:* Thanks for the pointer 🙂

Jon Van Oast (jon@wildme.org)
2020-04-23 14:28:19

*Thread Reply:* np. yeah that was kind of ridiculously unclear the way i stated it. thanks for the nudge.

❤️ Sara Beery
Lukas Liebel (lukas.liebel@tum.de)
2020-04-24 03:52:17

The workshop on Computer Vision Problems in Plant Phenotyping (CVPPP) is being held again at ECCV. Accepted full papers will be published in the ECCV workshops proceedings. Submission deadline is June 15th 2020. They also host some challenges. https://www.plant-phenotyping.org/CVPPP2020-CfP

🌿 Sara Beery, Siyu Yang
😎 Jon Van Oast
Jason Parham (bluemellophone@gmail.com)
2020-04-27 17:13:53

Hello everybody!

Jason Parham (bluemellophone@gmail.com)
2020-04-27 17:14:03

I’m starting a new channel called #help_needed

Sara Beery (sbeery@caltech.edu)
2020-04-27 17:16:37

*Thread Reply:* Not sure if you meant to spell need with three e's, but I vote to keep it. Emphasizes the neeed 😁

😍 Jon Van Oast
😀 Lukas Liebel
Jason Parham (bluemellophone@gmail.com)
2020-04-27 17:14:50

The purpose is to list research tasks that people need help on where we can all share a larger labor pool and volunteer when large workloads are on our todo lists

✔️ Jon Van Oast, Sara Beery, Siyu Yang, Stefan Schneider, Lukas Liebel
Jason Parham (bluemellophone@gmail.com)
2020-04-27 17:15:23

I am kicking this off because Wild me has a large project that some here might find interesting and a worthy cause to volunteer some time

👍 Sara Beery
Jason Parham (bluemellophone@gmail.com)
2020-04-27 18:02:35

A new project has appeared that Wild Me needs help with!

😍 Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2020-05-08 12:18:38

Do all the new people who have joined want to introduce themselves? Welcome to the community!!

🎉 Jon Van Oast, gvanhorn, J. Miguel Valverde
🙌 Suzanne Stathatos, David Healey
Sara Beery (sbeery@caltech.edu)
2020-06-09 12:38:32

*Thread Reply:* New members, intro yourselves! Where are you from? What are you working on? What problems are you interested in?

Jonathan Granskog (jonathan.granskog@gmail.com)
2020-06-09 12:52:17

*Thread Reply:* Sure, I’ll introduce myself. 🙂 I’m a Swedish-speaking Finn who lives in Switzerland. Currently, I’m working on deep learning for computer graphics rendering at NVIDIA. It has nothing to do with conservation unfortunately, but I’m hoping to be inspired by others on this channel and maybe start some side projects.

👏 Sara Beery
Sara Beery (sbeery@caltech.edu)
2020-06-09 12:54:13

*Thread Reply:* There is some cool work on graphics rendering for conservation coming from Michael Black's group, check out @Silvia Zuffi's ICCV 2019 paper https://arxiv.org/abs/1908.07201

arXiv.org
Jonathan Granskog (jonathan.granskog@gmail.com)
2020-06-09 12:58:34

*Thread Reply:* Ooh, thanks for sharing that. I’ll read it asap. 😄

J. Miguel Valverde (juanmiguel.valverde@uef.fi)
2020-06-12 02:56:10

*Thread Reply:* Hi! My name is Miguel and I'm currently working on rodent brain lesion segmentation in Finland, so basically segmentation of medical images with deep learning :) I find everything related to environment + computer science, and more specifically deep learning, very interesting, so I'm quite keen on following this topic more closely :) Nice to meet you, and thanks for letting me into this channel.

🎉 Sara Beery
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2020-05-08 20:53:39

Hi everyone! I'm pretty new here. My name is Suzanne! I volunteer for a group called Rainforest Connection (link below) and moonlight as a software engineer in the bay area. I didn't know until recently about this group. Super cool to be able to e-meet so many likeminded folks https://rfcx.org/

rfcx.org
🌿 Sara Beery, Zac Winzurk, Oisin Mac Aodha, David, Omiros Pantazis, Elizabeth Bondi, Siyu Yang
🌴 Lily Xu, Sara Beery, David Rolnick
Siyu Yang (yasiyu@microsoft.com)
2020-05-11 17:13:16

*Thread Reply:* Hi Suzanne! I met Topher White at a talk last year; rfcx is awesome

Suzanne Stathatos (suzanne.stathatos@gmail.com)
2020-05-11 19:43:32

*Thread Reply:* @Siyu Yang Amazing! Indeed it is 🙂

David Healey (david.w.healey@gmail.com)
2020-05-11 10:27:28

Hey all, I'm new here too! Been doing applied ML in industry for the last 5 years, most recently 3 years doing computer vision for drug discovery. I'm just getting into camera trap image analysis by way of helping some people locally (Utah, US) classify their images. I did my PhD in computational & experimental ecology, so it feels good to make my way back toward home a bit. I'm interested in meeting everyone and hearing what everyone's working on! Looking for the right space to continue working on the technology. If we haven't met yet, please reach out or expect me to reach out soon!

🐐 Sara Beery, Elizabeth Bondi, Lily Xu, Omiros Pantazis, Siyu Yang
Ben Weinstein (benweinstein2010@gmail.com)
2020-05-11 17:20:41

*Thread Reply:* Welcome. If you haven’t already been put in touch with @Siyu Yang, check out the MegaDetector for camera trap needs. Just out of curiosity, what are the challenges/workflow for computer vision in drug discovery?

David Healey (david.w.healey@gmail.com)
2020-05-12 00:19:22

*Thread Reply:* Hey Ben, I've been making great use of the megadetector (Thanks @Siyu Yang!)

Where I worked (recursionpharma.com) it was about analyzing cell morphology to characterize disease models and their interactions with potential drug candidates. It was high-throughput screening.

The biggest challenges are deconvolving experimental variation with actual biology a la https://www.kaggle.com/c/recursion-cellular-image-classification and just the general difficulty in associating morphological differences with underlying biology

kaggle.com
David Rolnick (dsrolnick@gmail.com)
2020-05-11 17:08:10

Hi everyone! I'm a postdoc at UPenn, starting as faculty at McGill/Mila in the fall, and also co-lead the Climate Change AI initiative on facilitating work at the intersection of machine learning and climate change (climatechange.ai).

🌍 Sara Beery, Siyu Yang, Oisin Mac Aodha, Ben Weinstein, gvanhorn, Lily Xu, Elizabeth Bondi, David Healey, David, Omiros Pantazis, Suzanne Stathatos
👏 Subhransu Maji, Ruth Taylor, gvanhorn, David
Sara Beery (sbeery@caltech.edu)
2020-05-14 18:14:33

Upcoming KDD Workshop: https://ai4good.org/fragile-earth/

"FEED20: Fragile Earth: Data Science for a Sustainable Planet We are excited to announce the Fragile Earth workshop at KDD ’20, Earth Day in San Diego on August 24th. Fragile Earth will bring together the research, industry, and policy community around enhancing scientific discovery in the earth sciences through the joint use of data, theory, and computation.  Whether it is food security, water scarcity, energy use, land restoration, climate models, or the incorporation of theoretical thinking into data driven frameworks to accelerate progress on the United Nations’ Sustainable Development Goals, and related areas, we invite you to be part of the community! We solicit three categories of papers + posters: research papers, extended abstracts, and position/vision papers.  Research papers should be 8-10 pages in length, extended abstracts 1-4 pages and position or vision papers between 2 to 6 pages. All should follow the ACM template: https://www.acm.org/publications/proceedings-template. Posters must fit standard 24″ x 36″ poster boards, and authors must print the posters themselves ahead of the conference. All submissions in PDF format please!"

Harry Horsley (harryhorsley9@gmail.com)
2020-05-18 11:22:38

Hello all, I'm new here. Currently studying for an MEng in Electrical and Information Sciences at the University of Cambridge and looking to apply this to conservation in the future. At the moment I'm learning about the field, seeing what's out there and trying to figure out an appropriate Master's project for next year.

❤️ Sara Beery
👍 Benjamin Kellenberger, Lily Xu, David Healey, Elizabeth Bondi, Lukas Liebel, Siyu Yang
Ben Seleb (bseleb3@gatech.edu)
2020-05-22 12:13:31

Hi everybody. I’m an incoming robotics PhD student at Georgia tech extremely interested in the development of technology for wildlife and conservation. I also help teach an undergrad course that introduces engineers to conservation!

🐘 Lily Xu, Sara Beery, Oisin Mac Aodha, Elizabeth Bondi, Manish Rai
🤖 David Rolnick, Siyu Yang
Ben Weinstein (benweinstein2010@gmail.com)
2020-05-22 13:29:16

My lab is running a competition for tree crown detection and species identification. For early-career students, this is a nice opportunity to tackle a really hard problem with wide applications. https://idtrees.org/competition/

🌳 Sara Beery, gvanhorn, Elizabeth Bondi, Siyu Yang, David
Ben Weinstein (benweinstein2010@gmail.com)
2020-05-24 21:45:21

does anyone know of examples/papers of ensemble object detection models? Combining anchor box predictions across multi-views of a fixed image (over time).

Riccardo de Lutio (riccardo.delutio@geod.baug.ethz.ch)
2020-05-25 04:37:02

*Thread Reply:* This paper could be interesting, it’s about combining object detection and instance re-identification from multiple views. The setup is to detect urban trees in multiple streetview images and combine these detections in order to create an inventory of geolocalized trees. http://openaccess.thecvf.com/content_ICCV_2019/papers/Nassar_Simultaneous_Multi-View_Instance_Detection_With_Learned_Geometric_Soft-Constraints_ICCV_2019_paper.pdf

👍 Ben Weinstein, Sara Beery
Sara Beery (sbeery@caltech.edu)
2020-05-26 13:06:33

*Thread Reply:* I'd look at the video object detection literature. Previously there was a large focus on LSTM- and/or 3D-convolution-based approaches; recently it seems that attention (as in my Context R-CNN paper) is very popular and works quite well.
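A simple baseline for the question above, separate from the learned approaches in the cited papers: pool the boxes from every view of the scene and run greedy non-maximum suppression over the pooled set, so each object survives once at its highest-confidence detection. This is a hedged sketch; `merge_detections` and the threshold are illustrative, not from any particular paper.

```python
import numpy as np

def iou(box, boxes):
    """IoU of one [x1, y1, x2, y2] box against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def merge_detections(boxes_per_view, scores_per_view, iou_thresh=0.5):
    """Pool detections from all views of the same scene, then run greedy
    NMS so each object is kept once, at its highest-scoring detection."""
    boxes = np.concatenate(boxes_per_view)
    scores = np.concatenate(scores_per_view)
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # drop lower-scored boxes that overlap the one we just kept
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return boxes[keep], scores[keep]
```

Weighted box fusion (averaging overlapping boxes rather than discarding them) is a common refinement of the same idea.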

Sara Beery (sbeery@caltech.edu)
2020-06-05 14:22:10

Hi everyone! I want to explicitly state that this slack channel should be a safe space for anyone, regardless of race, sex, gender, or nationality, to feel comfortable participating. As the founder of this slack channel, please reach out to me and let me know if you have ever experienced anything in this space that has made you feel unwelcome so that I can work to make our community inclusive for all. Specifically, I want to make sure that any Black members of this community feel supported. I, and I am sure many others here as well, believe that Black lives matter.

👍 gvanhorn, Holger Klinck, Elijah Cole (Deactivated), Riccardo de Lutio, Mathias Tobler, Omiros Pantazis, David, Stefan Schneider, Mikey Tabak
❤️ Hannah Kerner, Harry Horsley, Lily Xu, Nathaniel Rindlaub, Megan Cromp, David Rolnick, Elizabeth Bondi, Ben Koger, Siyu Yang, Zac Winzurk, Suzanne Stathatos, David, Björn Lütjens, Sam Kelly, Stefan Schneider, Carly Batist, Greg Lipstein
Sara Beery (sbeery@caltech.edu)
2020-06-08 17:34:58

A question for the conservationists, what do you consider to be the biggest conservation success stories in the US? Are those the projects that get media attention, or no?

Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 17:37:48

*Thread Reply:* bald eagle population rebounds are pretty spectacular

🦅 Sara Beery, Siyu Yang, Björn Lütjens
Holger Klinck (hk829@cornell.edu)
2020-06-08 17:39:38

*Thread Reply:* Northern elephant seals for sure!

👍 Sara Beery, Siyu Yang
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 17:41:13

*Thread Reply:* Antarctic whale recovery rates are pretty stellar, mostly humpbacks. If current estimates of 1900s-era harvesting rates are true, it's amazing there are any whales in the Southern Ocean.

🐋 Sara Beery, Siyu Yang, Björn Lütjens
Sara Beery (sbeery@caltech.edu)
2020-06-08 17:42:25

*Thread Reply:* Follow-up question, what about these conservation efforts made them so effective?

Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 17:43:07

*Thread Reply:* there was a specific and targeted set of threats that came from a narrow set of actors.

👍 Sara Beery, Björn Lütjens
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 17:45:39

*Thread Reply:* eagles -> DDT, whales -> sailors. The diffuse nature of most conservation challenges (climate, habitat loss) prevents specific legislation or actors. Also, in both cases, the demand side of the equation made sense. Whale oil/meat became less desirable after petroleum deposits were found. There were alternative pesticides besides DDT. There was a technical solution that more or less reduced the incentive to harvest. Which is why you still get pangolin harvests, even though there is a specific actor (poachers). The demand remains.

💰 Sara Beery
👍 Siyu Yang, Björn Lütjens
Sara Beery (sbeery@caltech.edu)
2020-06-08 17:46:27

*Thread Reply:* Any success stories you can think of not centered around a single species? Like, I don't know, a regional success story?

Sara Beery (sbeery@caltech.edu)
2020-06-08 17:46:38

*Thread Reply:* Or something?

Holger Klinck (hk829@cornell.edu)
2020-06-08 17:47:32

*Thread Reply:* Migratory Bird Act which is currently being scrutinized.

😡 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 17:48:14

*Thread Reply:* much more controversial -> tiger reserves in India. There are a lot more tigers and it's been a boon for wildlife, but there are a lot of questions about human-wildlife conflict and fairness. Outside of my expertise.

Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 17:48:57

*Thread Reply:* you can definitely find some Indian conservationists (my friend Anusha Shankar will talk to you forever) if you want more detail there.

🐅 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 17:50:07

*Thread Reply:* https://anushashankar.weebly.com/ (tell her I sent ya)

anushashankar.weebly.com
Sara Beery (sbeery@caltech.edu)
2020-06-08 17:50:27

*Thread Reply:* Awesome, thanks!

Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 17:51:58

*Thread Reply:* it makes for good reading. https://www.nature.com/articles/d41586-019-03267-z

Nature
Dan Sheldon (sheldon@cs.umass.edu)
2020-06-08 18:39:45

*Thread Reply:* For >1 species how about waterfowl conservation in North America? https://en.wikipedia.org/wiki/North_American_Waterfowl_Management_Plan (also wetlands conservation, public-private partnerships, etc.)

Wikipedia (https://en.wikipedia.org/)
🦆 Sara Beery
David Rolnick (dsrolnick@gmail.com)
2020-06-08 18:49:14

*Thread Reply:* Clean Water Act is a big one. Not just with conservation in mind, but hugely impactful.

🐟 Sara Beery
Sara Beery (sbeery@caltech.edu)
2020-06-08 18:59:24

*Thread Reply:* So, I'm seeing trends of stabilization/regrowth of specific populations - particularly when it can be made financially viable, and large-scale far-reaching protective legislation. What else is successful?

Siyu Yang (yasiyu@microsoft.com)
2020-06-09 04:40:00

*Thread Reply:* Not a conservationist but I read Noah’s Choice last year which tells stories around the Endangered Species Act. Slightly dated but very good perspective on what makes conservation policies successful/challenging and a very fun read https://www.goodreads.com/book/show/24022445

goodreads.com
👍 Sara Beery
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-06-09 10:49:05

*Thread Reply:* California Condor recovery program! https://www.fws.gov/cno/es/calcondor/CondorResources.cfm

fws.gov
👍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 19:23:18

I’m giving a talk on Wednesday in Ecuador (virtually), in a mix of English and Spanish. There are some great field researchers there. If anyone here wants to get involved with work in the tropics, both on the data processing side and in applied conservation, I'm happy to use this talk as a networking opportunity. All are welcome. @Siyu Yang, MegaDetector gets a shoutout. @gvanhorn, Merlin too. I’m sure you’ll get a couple of emails about applications. I’m happy to stick in slides if there are other projects that might be useful to advertise (@Holger Klinck and I just talked about audio analysis in the tropics; I put it under future opportunities)

🙌 Sara Beery, Lily Xu, gvanhorn, Benjamin Kellenberger, Siyu Yang, Suzanne Stathatos, Björn Lütjens
Lily Xu (lily_xu@g.harvard.edu)
2020-06-08 19:27:37

*Thread Reply:* sounds interesting! to be sure i'm getting time zones correct: that's 12pm ET on wednesday?

👍 Ben Weinstein
Sara Beery (sbeery@caltech.edu)
2020-06-08 19:35:24

*Thread Reply:* I'd love to hear your talk! How lost would I be with my "very good at ordering tacos" Spanish?

Ben Weinstein (benweinstein2010@gmail.com)
2020-06-08 19:37:35

*Thread Reply:* You’ll be fine, I still haven’t heard back about what language I should be speaking in. So all the slides are in English right now. I might speak in Spanish, but it’s really up to the talk organizer.

👍 Sara Beery
Mikey Tabak (tabakma@gmail.com)
2020-06-09 07:08:55

Is anyone in this group using computer vision for drone images? I'm starting a project where we're trying to detect a really small object from really high above. I'm planning to use local feature extraction and wondering what the best current options are.

Sara Beery (sbeery@caltech.edu)
2020-06-09 11:05:51

*Thread Reply:* @Elizabeth Bondi is! So is @Benjamin Kellenberger!

Elizabeth Bondi (ebondi@g.harvard.edu)
2020-06-09 12:06:47

*Thread Reply:* Hi @Mikey Tabak! My experience is primarily with thermal imagery from drones, and so far we've found that in this context, Faster RCNN with VGG-16 works better compared with some other baselines. More detailed results can be found in our recent paper here: https://teamcore.seas.harvard.edu/files/teamcore/files/2020_07_teamcore_wacv_birdsai.pdf. Please let me know if you have any questions!

❤️ Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-09 12:42:53

*Thread Reply:* @Mikey Tabak can you show a photo to get some details? Lots/few of objects in an image? Here is a recent nesting bird drone model I did. Transfer learned from my tree model available here (https://deepforest.readthedocs.io/) Keras-retinanet is the backbone engine.

❤️ Sara Beery, Mikey Tabak
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-06-09 13:16:09

*Thread Reply:* Good call @Ben Weinstein @Elizabeth Bondi! The success of aerial wildlife detection depends on the background complexity and the number of annotations available. For example, I’ve had good results with rarely occurring animals, but it required some tweaking. Here’s our work on weakly-supervised mammal detection from drones (done with a modified ResNet-18): http://openaccess.thecvf.com/content_CVPRW_2019/papers/EarthVision/Kellenberger_When_a_Few_Clicks_Make_All_the_Difference_Improving_Weakly-Supervised_CVPRW_2019_paper.pdf On the other hand, I also recently tried extremely densely populated scenes. I was able to use the same model, but had to use some other tricks (paper in the making). I guess we really need some examples to get a better idea.

In the meantime, my recommendations are:
• Think first about whether you really need bounding boxes, or whether points suffice. In the latter case, you can use smaller models (as I did), which are easier to implement, better to train on small datasets, and faster.
• If you need bounding boxes, both Ben's and Liz's recommendations (Faster R-CNN and RetinaNet) are great.
• As the underlying feature extractor (part of any convolutional neural network), I personally like ResNet. VGG, as mentioned by Liz, is also extremely powerful, but super big.

One last question: do you already have labels? If you need to annotate images, I’ll cheekily take this opportunity to advertise our project as well. It’s called “AIDE” and is an annotation interface with deep learning models built in (such as RetinaNet): https://github.com/microsoft/aerial_wildlife_detection (I’ll leave a proper introduction of AIDE in the chat for another time 🙂)

❤️ Sara Beery, Mikey Tabak
Mikey Tabak (tabakma@gmail.com)
2020-06-09 14:44:23

*Thread Reply:* Thank you all for your ideas! Unfortunately I can't share any images because of intellectual property issues. We were debating having technicians "paint" the objects of interest vs bounding boxes. I have not seen anyone use points instead of boxes as you mentioned @Benjamin Kellenberger. This might be the most sensible for our dataset.

Mikey Tabak (tabakma@gmail.com)
2020-06-09 14:45:15

*Thread Reply:* A followup question: What types of gui software do y'all use to have technicians provide bounding boxes or points of objects for detection models?

Ben Weinstein (benweinstein2010@gmail.com)
2020-06-09 14:47:51

*Thread Reply:* https://rectlabel.com/

RectLabel
👍 Mikey Tabak
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-06-09 14:49:12

*Thread Reply:* We have our own software AIDE that supports labels, points, bounding boxes, and segmentation masks and has deep learning models built-in: https://github.com/microsoft/aerial_wildlife_detection

AIDE is under active development (by me), but already quite usable (and being used for wildlife conservation at this moment). Plus, it’s completely free and open source.

GitHub
👍 Sara Beery, Mikey Tabak
Lukas Liebel (lukas.liebel@tum.de)
2020-06-10 04:23:22

*Thread Reply:* Hi there, we recently finished a project segmenting invasive plants from UAV RGB images with around 3.8 mm ground-sampling distance. A student of mine also did a comprehensive study on different networks. We didn't work with detection networks, though. Still, let me know if you have any other questions; maybe I can share some insights. We're also looking to publish this work in the near future. Code should be available soon as well!

🌿 Sara Beery
👍 Mikey Tabak
Mikey Tabak (tabakma@gmail.com)
2020-06-11 14:16:40

*Thread Reply:* Thank you @Benjamin Kellenberger and @Lukas Liebel!

Mikey Tabak (tabakma@gmail.com)
2020-06-11 14:17:33

*Thread Reply:* and @Ben Weinstein!

Mikey Tabak (tabakma@gmail.com)
2020-06-30 10:22:49

*Thread Reply:* Another followup question on analyzing drone images: What type of packages/code have y'all used to take aerial photos that are georeferenced, crop them, detect objects (or segment pixels), and then put them back into a georeferenced format. This seems like a pretty big challenge that I assume others have addressed.

Lukas Liebel (lukas.liebel@tum.de)
2020-06-30 10:57:03

*Thread Reply:* You mean, something like 'rasterio' to read and write georeferenced images with python?

👍 Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-30 11:56:48

*Thread Reply:* I’ve written a few gists around this idea, you’ll need to be a bit more specific, but this will give you the flavor https://gist.github.com/bw4sz/e2fff9c9df0ae26bd2bfa8953ec4a24c

👍 Mikey Tabak
Lukas Liebel (lukas.liebel@tum.de)
2020-06-30 12:15:36

*Thread Reply:* As far as cropping patches for training from a bigger image or mosaic is concerned, windowed reading/writing using rasterio is very helpful. It also allows you to easily save prediction results/labels/whatever (basically everything that can be converted to a numpy array and, in the easiest case, is of the same shape as your image data) to georeferenced TIFFs. A simple workflow could be:
• open the image as a dataset
• read the metadata (shape, channels, georeference)
• read a patch from a specified position in the image (saves you from reading an enormous image into your poor RAM)
• predict that patch
• repeat these two steps until you have accumulated all of your desired predictions in a numpy array (same size as the original image, given by the image's metadata)
• open a new dataset for the predictions (e.g., a new GeoTIFF file)
• copy the metadata from the image to this new dataset (you'll probably want to adjust the number of bands/channels), including the georeference
• write the numpy array to this new dataset
• done. You have successfully generated a fully georeferenced prediction for your image. The only thing left is to develop your magic black box predictor 😛
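The tile/predict/reassemble loop described above can be sketched as follows. `predict_patch` is a hypothetical stand-in for the model, and plain NumPy slicing stands in for rasterio's windowed reads against a real GeoTIFF:

```python
import numpy as np

def predict_full_image(image, predict_patch, patch=256):
    """Tile an (H, W, C) array, run `predict_patch` on each tile, and
    reassemble the per-pixel outputs into one full-size (H, W) array.
    With rasterio, the slicing below becomes windowed reads on the open
    dataset, and `out` is written to a new GeoTIFF whose metadata
    (transform, CRS, shape) is copied from the source image."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.float32)
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tile = image[r:r + patch, c:c + patch]  # edge tiles may be smaller
            out[r:r + tile.shape[0], c:c + tile.shape[1]] = predict_patch(tile)
    return out
```

Because the output array has the same shape as the source image, copying the source's metadata to the output dataset is all that's needed to keep the result georeferenced.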

👍 Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-30 12:17:19

*Thread Reply:* I just wrote some of this logic for wrapping into tensorflow records for nice fast training. beware large sizes on HDD. Much as @Lukas Liebel alludes to https://github.com/weecology/DeepTreeAttention/blob/6361e4c5eda907d83643cfd789c0bef78be3fe2a/DeepTreeAttention/generators/make_dataset.py#L53

GitHub
👍 Lukas Liebel, Sara Beery, Mikey Tabak
Mikey Tabak (tabakma@gmail.com)
2020-06-30 16:16:34

*Thread Reply:* Thank you both for your responses! I was hoping to use rasterio (@Ben Weinstein it looks like you're using this too) but I haven't been able to get it installed. I'm (unfortunately) using Windows now and the installation is problematic. I'm hoping they'll respond to my question about it on their GitHub page. My general plan is to get large aerial image files, split them into chips while preserving the georeferencing, have technicians paint masks (for object segmentation) on some of the images, train a model on these annotated images, run the model on non-annotated images, and reassemble the chips into one georeferenced file, so I can have the locations of the objects in the images.

Mikey Tabak (tabakma@gmail.com)
2020-06-30 16:17:19

*Thread Reply:* And @Lukas Liebel, yes I'll be doing this in Python

Lukas Liebel (lukas.liebel@tum.de)
2020-07-02 09:39:08

*Thread Reply:* @Mikey Tabak Well, your plan definitely sounds doable 🙂 In general, GDAL might also help you (either the standalone command line tool or the python package). Especially when splitting the training images into tiles it may be more convenient than re-implementing your own routine in python. Both ways should work with minimal effort though.

Lukas Liebel (lukas.liebel@tum.de)
2020-07-02 09:42:14

*Thread Reply:* @Mikey Tabak Regarding the installation of rasterio: In my experience (on Linux machines) it was always just a matter of a simple 'pip install rasterio' call. I'd suggest to use Anaconda. Chances are rasterio is also available as a conda package and Anaconda also comes with pip on Windows I guess. Another workaround could be using a Docker container. Even though this is kind of an extra mile to go, you may want to use it for training or other future tasks at some point anyway. I can help you with a minimal setup if you want to try that. It's dead simple for most of our daily applications but requires quite some Googling if you're not familiar with the terms and typical problems.
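For reference, the conda-forge route mentioned above usually looks like this on Windows (the environment name `geo` and the Python version are arbitrary choices):

```shell
# conda-forge ships prebuilt binaries for rasterio and its GDAL
# dependency, which sidesteps the usual pip build failures on Windows
conda create -n geo -c conda-forge python=3.8 rasterio gdal
conda activate geo
```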

Mikey Tabak (tabakma@gmail.com)
2020-07-07 16:12:52

*Thread Reply:* Thanks @Lukas Liebel, I'm still working on getting rasterio installed and nothing is working. I might use R instead for this task. It interfaces really well with GDAL.

Mikey Tabak (tabakma@gmail.com)
2020-07-07 16:15:00

*Thread Reply:* @Benjamin Kellenberger Do you have an estimate of when your AIDE software will support semantic segmentation? I don't need to use the software to run the AI model, I just want to be able to have technicians "paint" the images and then be able to access the annotations. The software is really impressive for its other functions!

➕ Sara Beery
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-07-08 02:36:43

*Thread Reply:* @Mikey Tabak Thanks a lot for considering our software! 🙂 AIDE in its current version (v2) fully supports semantic segmentation and even has U-Net built-in! Annotation is still a bit manual (without tools like Photoshop’s magic wand built-in), but we’ll be adding those as we move along. You can find a demo instance of it (v1) here. Feel free to let me know if you have more questions, wishes for functionality, or if you find errors.

👍 Mikey Tabak, Sara Beery
Mikey Tabak (tabakma@gmail.com)
2020-07-08 11:06:26

*Thread Reply:* Thanks @Benjamin Kellenberger! I got segmentation running on my project. I wasn't configuring it properly and that's why I didn't see the interface for the segmentation. This is the perfect tool for what I'm trying to do. I have some little questions about it, but I'm going to put these on the github page as others will likely have similar questions.

👍 Benjamin Kellenberger
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-09 12:49:07

To keep the brainstorming + collaboration going (I’m really glad to see that this channel has gotten so active, thanks @Sara Beery and AI4Earth for $): if anyone out there wants to work with me on hyperspectral image classification, I am creating an open-source implementation of this great paper (Hang et al. 2020, “Hyperspectral Image Classification with Attention Aided CNNs”) here: https://github.com/Weecology/DeepTreeAttention. Only 1 week in. All are welcome. I’m going to recreate their results using open-source benchmarks.

GitHub
🌳 Sara Beery, gvanhorn, Jonathan Granskog, Omiros Pantazis
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-09 12:50:01

*Thread Reply:* here is the model architecture

Holger Klinck (hk829@cornell.edu)
2020-06-16 07:16:35

Our Kaggle competition is live. If you have any questions about the competition, please let me know! https://www.kaggle.com/c/birdsong-recognition/overview

kaggle.com
👍 Oisin Mac Aodha, gvanhorn, Sara Beery, Jonathan Granskog, David Rolnick, Ben Weinstein, Omiros Pantazis, Lily Xu
😎 Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2020-06-16 11:15:07

New funding opportunity, passed along by Serge Belongie: Citizen Science for Earth Systems Program (CSESP)

🎉 Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2020-06-16 11:15:27

*Thread Reply:* Questions concerning the Citizen Science for Earth Systems Program may be directed to Kevin Murphy, who may be reached at kevin.j.murphy@nasa.gov, and Gerald "Stinger" Guala, who may be reached at gerald.f.guala@nasa.gov.

ROSES-20 Amendment 32: A.41 Citizen Science for Earth Systems Program Final Text and Due Dates Released

The primary goal of the Citizen Science for Earth Systems Program (CSESP) is to develop and implement capabilities to augment and enhance NASA scientific data and capacity through voluntary observations, interpretations, or other direct participation by members of the general public to advance understanding of the Earth as a system. The program complements NASA's capability of observing Earth globally from space, air, land, and water by engaging the public in NASA's strategic goals in Earth Science (see https://science.nasa.gov/about-us/science-strategy).

ROSES-2020 Amendment 32 releases final text and due dates for A.41 Citizen Science for Earth Systems Program. Mandatory Notices of Intent are due August 4, 2020, and proposals are due September 11, 2020.

On or about June 12, 2020, this Amendment to the NASA Research Announcement "Research Opportunities in Space and Earth Sciences (ROSES) 2020" (NNH20ZDA001N) will be posted on the NASA research opportunity homepage at http://solicitation.nasaprs.com/ROSES2020 and will appear on SARA's ROSES blog at: https://science.nasa.gov/researchers/sara/grant-solicitations/roses-2020/.

Stefan Schneider (sschne01@uoguelph.ca)
2020-06-16 12:00:50

Hi all! Tomorrow, Wednesday June 17th at 1:00pm, I'll be defending my PhD on Deep Learning for Animal Re-ID. The presentation portion is open to the public and all interested individuals are encouraged to attend! It can be accessed at this Zoom link which you can open in any browser:

https://us02web.zoom.us/j/84071151383

The link opens at 12:45pm EST and entry will be denied after 12:55pm EST, so you gotta be on time. Look forward to having anyone that wants to check it out!

🎉 Sara Beery, gvanhorn, Lily Xu, Manish Rai, Malte Pedersen, Siyu Yang
🦁 Sara Beery
Malte Pedersen (mape@create.aau.dk)
2020-06-17 05:15:08

Hi all, I hope it's okay that I advertise a bit for our new multiple object tracking challenge of zebrafish in 3D, which is now live on https://motchallenge.net/data/3D-ZeF20. Behavioral studies of zebrafish can help biologists understand the impact of harmful substances and materials, rising temperatures, and more in our marine environments, and by publishing this dataset we hope we can spark interest in this relatively unknown field. You can read more on our project page: vap.aau.dk/3d-zef and in our CVPR2020 paper: 3D-ZeF: A 3D Zebrafish Tracking Benchmark Dataset

Cheers 🐟

motchallenge.net
👍 gvanhorn, Sara Beery
🐟 Sara Beery, Megan Cromp
Lily Xu (lily_xu@g.harvard.edu)
2020-06-19 18:06:27

XPRIZE has a $10mil rainforest competition going on: https://www.xprize.org/prizes/rainforest

> Rainforests cover less than 10% of the earth's land surface, but they house approximately 50 million inhabitants and over 50% of the planet's biodiversity. Although they are the most biodiverse ecosystems, there is limited knowledge of everything that lives in these iconic environments. The value of the standing trees is not fully understood, and our ability to assess this value is restricted because the rainforest environment is dense, vast, and complex.

> The winning team will develop novel technologies to rapidly and comprehensively survey rainforest biodiversity and use that data to improve our understanding of this complex ecosystem.

XPRIZE
🌴 Sara Beery, Omiros Pantazis, David, Ankita Shukla
Hannah Kerner (hkerner@umd.edu)
2020-06-24 16:56:18

Hi all, we are looking for a postdoc in the NASA Harvest group at UMD College Park. Research topic is ML and EO applications for cropland & crop production forecasting in smallholder agriculture (field to national scales) focusing on Sub-saharan Africa. Please apply or share with potential applicants! Thank you! https://ejobs.umd.edu/postings/78351

ejobs.umd.edu
🌍 Sara Beery, Lily Xu, gvanhorn, Laurel Hopkins
Sara Beery (sbeery@caltech.edu)
2020-06-24 17:04:44

*Thread Reply:* @Caleb Robinson, @Elijah Cole (Deactivated) you guys might know someone?

Caleb Robinson (calebrob6@gmail.com)
2020-06-24 17:54:54

*Thread Reply:* Hi @Hannah Kerner! I don't know of anyone with an EO background looking for postdoc unfortunately

Caleb Robinson (calebrob6@gmail.com)
2020-06-24 17:55:39

*Thread Reply:* I do know someone looking to apply to a CS PhD program

👍 Sara Beery
Caleb Robinson (calebrob6@gmail.com)
2020-06-24 17:56:26

*Thread Reply:* I'll check back with them in 4-6 years 👍

😅 Sara Beery, Hannah Kerner
Hannah Kerner (hkerner@umd.edu)
2020-06-25 09:22:28

*Thread Reply:* Candidates can also have an ML background but no EO background yet, as long as that's a direction they're interested in!

Ben Weinstein (benweinstein2010@gmail.com)
2020-06-28 10:47:34

*Thread Reply:* @Hannah Kerner this just popped up on my scholar feed if useful (https://www.frontiersin.org/articles/10.3389/fenvs.2020.00085/full)

Frontiers
Hannah Kerner (hkerner@umd.edu)
2020-06-29 09:00:27

*Thread Reply:* thanks for sharing, definitely relevant!

Sara Beery (sbeery@caltech.edu)
2020-06-25 10:13:52

I'm giving a WILDLABS "Tech Tutorial" this morning (8am PST) on getting started using ML for camera traps. It's geared more towards an ecology/non-ML audience so I didn't think to post it here, but if anyone is interested you may still be able to register! https://www.eventbrite.co.uk/e/tech-tutors-how-do-i-get-started-using-ml-for-my-camera-traps-tickets-109658752280?ref=estw

If you can't register, the recording will be posted on YouTube tomorrow :)

Eventbrite
👍 Oisin Mac Aodha, Nathaniel Rindlaub, Elijah Cole (Deactivated), Lily Xu
Tee R. (taylornroberts2009@gmail.com)
2020-06-25 13:12:29

Hey all! Newbie here, wanting to learn more about wildlife and AI!

❤️ Sara Beery
👍 Oisin Mac Aodha, Benjamin Kellenberger
👋 Jon Van Oast, Siyu Yang
Sara Beery (sbeery@caltech.edu)
2020-06-25 13:12:57

*Thread Reply:* Welcome!

Jon Van Oast (jon@wildme.org)
2020-06-25 14:30:25

*Thread Reply:* glad to have you here

Sara Beery (sbeery@caltech.edu)
2020-06-26 13:46:51

Context R-CNN is on the Google AI blog today! https://ai.googleblog.com/2020/06/leveraging-temporal-context-for-object.html

Google AI Blog
👍 gvanhorn, Benjamin Kellenberger, Elizabeth Bondi, Hannah Kerner, Zac Winzurk, Chris Yeh
😍 Chris Yeh, Jon Van Oast
Drew Gray (drewjgray@gmail.com)
2020-06-30 18:12:22

Hey everyone! I just joined, thanks for the invite @Sara Beery. I have just started a non-profit called Aqualink. We are building a global ocean monitoring system with connected buoys measuring the temperature at coral reefs. A few of our flagship sites will have live streaming cameras... coral cams 🙂 Once we are up and running we are looking to build some models for coral reef health analysis through computer vision and machine learning. Let me know if you would like to learn more or get involved! We will be open-sourcing all of our data and models as well.

🐚 Sara Beery, Lily Xu, David Healey, Elizabeth Bondi
🐙 Lukas Liebel, Tee R.
👍 Carly Batist
Ben Weinstein (benweinstein2010@gmail.com)
2020-06-30 20:43:26

*Thread Reply:* Hi @Drew Gray, perhaps you know them already, but https://www.nature.com/articles/srep23166 -- the Scripps / San Diego groups did a lot of work here in the last decade; it would be good to hear from them.

Scientific Reports
👍 Sara Beery
Drew Gray (drewjgray@gmail.com)
2020-07-01 12:04:42

*Thread Reply:* Thanks @Ben Weinstein!

Bob Zak (robert.zak@comcast.net)
2020-07-01 19:42:51

Hi, all. Another new join -- thanks @Sara Beery ^2. Post retirement, I'm looking at an "encore" career in outreach and education for wildlife conservation based on trail cameras with my wife. Nothing too big, yet -- a blog and Janet's book -- but with aspirations for using AI technology to improve the usefulness of cameras in capturing compelling photos and videos of wildlife behavior.

🦊 Sara Beery, Elizabeth Bondi, Lukas Liebel, Omiros Pantazis, Jonathan Granskog
👍 Alasdair Davies
Sara Beery (sbeery@caltech.edu)
2020-07-01 19:57:05

*Thread Reply:* @Sam Kelly I talked to Bob earlier, and he's interested in edge-based solutions. You two should talk!

Bob Zak (robert.zak@comcast.net)
2020-07-01 20:10:43

*Thread Reply:* I am. I emailed @Sara Beery about porting MegaDetector to TFLite for embedded SoCs -- something that appears overly ambitious for the current state of the art -- still, we have what's left of Moore's Law 🙂 Another area I've thought about is whether we can improve PIR sensor accuracy using AI -- perhaps leveraging new Si aimed at "always on" audio processing. Very interested to hear what you are up to.

Sara Beery (sbeery@caltech.edu)
2020-07-01 20:12:56

*Thread Reply:* I think lightweight detection is totally possible and even useful! It's just not as accurate, which is why we kept the MegaDetector a larger model.

Bob Zak (robert.zak@comcast.net)
2020-07-01 20:15:49

*Thread Reply:* Oops -- I misunderstood. I need to understand accuracy tradeoffs more. In the app space I have in mind, camera location independence is key. In any case, I should start small to limit the amount of trouble I get into.

Sara Beery (sbeery@caltech.edu)
2020-07-01 20:17:22

*Thread Reply:* I think the PIR sensor accuracy seems really interesting too!

Sam Kelly (sam@conservationxlabs.org)
2020-07-02 10:47:33

*Thread Reply:* Hi @Bob Zak! Would love to chat to you about this! We have been thinking about some of this for a while at Conservation X Labs! Definitely some things I would love to discuss with you!

❤️ Sara Beery
Bob Zak (robert.zak@comcast.net)
2020-07-02 12:56:59

*Thread Reply:* @Sam Kelly Great! I just updated my profile with a skype address; (updated) phone in profile also works. I'm around now through the afternoon.

Jonathan Granskog (jonathan.granskog@gmail.com)
2020-07-06 14:28:57

*Thread Reply:* I love this idea and it’s something I’ve been thinking about too. Maybe you could look into using NVIDIA’s Jetson Nano or Xavier NX as well.

Bob Zak (robert.zak@comcast.net)
2020-07-06 15:40:45

*Thread Reply:* Yeah -- it seems like there might be a lot of hardware options. But (as a newb to the details) I have basic questions about MD model size; the extent to which it's been optimized to reduce said size; tolerance to things like int8 mapping; etc. I guess another option would be to start with the same training set, but target a smaller (more edge-friendly) model to start with. So many questions...

👍 Sam Kelly, Jonathan Granskog
Sam Kelly (sam@conservationxlabs.org)
2020-07-06 15:51:45

*Thread Reply:* lots of awesome innovation happening in this space - on the hardware front: https://github.com/basicmi/AI-Chip We are attempting what you are talking about with a YOLO-tiny / Darknet (or similar) base model with limited classes and a transfer-learning approach with pruning/quantization included. Will be sure to share how it goes

GitHub
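As a side note on the pruning/quantization mentioned above: the core of int8 post-training quantization is an affine scale/zero-point mapping applied per tensor. A minimal NumPy sketch of just that mapping (this is an illustration of the scheme, not any particular converter's API; real toolchains also quantize activations and fuse ops, which is where unsupported-op issues tend to come from):

```python
import numpy as np

def quantize_int8(w):
    """Affine post-training quantization: w is approximated by scale * (q - zero_point)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale)) - 128   # maps lo -> -128, hi -> 127
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return scale * (q.astype(np.float32) - zero_point)
```

Each weight then costs 1 byte instead of 4, and the reconstruction error is bounded by one quantization step.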
Bob Zak (robert.zak@comcast.net)
2020-07-06 16:28:19

*Thread Reply:* So someone has already concluded that running current Inception-Resnetv2 MD (tuned, optimized, int8, etc.) doesn't work on one of these edge devices?

Ed Miller (ed@hypraptive.com)
2020-08-01 19:55:21

*Thread Reply:* @Bob Zak and @Sam Kelly I am also interested in intelligent camera traps. There are SoCs that can run high-accuracy models and those that can run at really low power. The trick is to find the middle ground that makes sense. You can run Inception-Resnetv2 at the edge, but the power might be more than you bargain for. But that also depends on how frequently you run a full inference.

Bob Zak (robert.zak@comcast.net)
2020-08-01 21:00:05

*Thread Reply:* On further investigation, and after conversation with @Sam Kelly, I'm convinced that the "intelligent camera" needs to start somewhere other than Megadetector. MD is solving a much harder problem -- locating any animal in any arbitrary trail camera photo, with no a priori context. Vs. at the edge, you have all sorts of context -- e.g. you know what the (animal-less) scene is, under any lighting condition. Intuitively, it seems like this should really simplify the detection problem. Which I think is critical -- even with low power hardware, my sense is that deep networks like MD are orders of magnitude more energy intensive than can be supported on batteries. Even worse, the most interesting edge camera application (I think) is likely video, and here you need to keep up with frame rate.

Ed Miller (ed@hypraptive.com)
2020-08-02 21:55:24

*Thread Reply:* I agree MD, as it is, is too heavy for frame-by-frame detection at the edge. I would think you would have a cascade of models, starting with "something is out there", perhaps using a motion detector or "smart PIR" as you mentioned in another thread. The fixed context of the camera, as you mentioned, should help with verifying there is an object of interest without having to run a complex model. Once you have a region of interest, you can run through a further cascade of models as needed:

  1. You may want to classify at a coarse level, similar to MD: animal/person/vehicle. This would be beneficial if the MD-"lite" model is lighter weight than the species classifier, or if you want fast alerts for human/vehicle hits (possible poachers, etc.).
  2. If you want species level classification, you can have models trained on the "common" species for the camera area.
  3. If you want individual classification (like we do for the BearID Project) you only need to run when there's a species match. You shouldn't need to classify every frame at each level. You can use some computer vision based object tracking algorithms (again the known background should make this easier) to follow the object through frames and only run the classifier on a few frames to make sure there is agreement. You could also run some of the models with lowered resolution to save cycles/energy. Also, not everything has to run "live".

You probably don't need the bits that say something interesting is there to run on every frame. Maybe you run a few at the beginning to make sure we want to record this, then rely on the object tracking to keep recording until the object leaves the scene. You can run some of the other models in the background from the recording when nothing else is happening.

Any thoughts on a data backchannel such as LoRa or DASH7?
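The cascade above can be sketched as an ordered list of (name, model, energy cost) stages where any stage can reject the frame and stop further, heavier work. The stage functions here are hypothetical stand-ins for the real motion gate / coarse detector / species classifier, and the costs are made-up relative numbers:

```python
def run_cascade(frame, stages):
    """Run `frame` through an ordered list of (name, model_fn, cost) stages.

    Returns (labels, energy_spent). A stage returning None rejects the frame
    and stops the cascade, so later (heavier) stages are never charged.
    """
    labels, energy = {}, 0.0
    for name, model_fn, cost in stages:
        energy += cost
        result = model_fn(frame, labels)
        if result is None:          # nothing of interest: stop early
            return labels, energy
        labels[name] = result
    return labels, energy

# Hypothetical stand-ins for the real detectors/classifiers:
stages = [
    ("motion",  lambda f, _: True if f["moving"] else None, 1.0),
    ("coarse",  lambda f, _: f.get("kind"),                 10.0),
    ("species", lambda f, _: f.get("species"),              50.0),
]
```

With this structure, the per-frame energy for an empty scene is just the cost of the cheapest gate, which is the whole point of the hierarchy.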

Bob Zak (robert.zak@comcast.net)
2020-08-03 07:38:41

*Thread Reply:* A hierarchy of models definitely makes sense (although, depending on the size of each of the models, the energy cost of swapping the models around could be significant). In addition to the usual AI worries (training, accuracy, size, etc.), I think you also have to worry about the "value proposition" for the usage model relative to other solutions, especially considering energy costs. E.g., storage capacity and energy per image are relatively cheap -- if you can afford to wait for results, then doing all the processing offline makes a lot of sense. If you need the data urgently (e.g. for poaching detection, or "real-time" animal localization) then you have to consider the alternative of sending thumbnail images via radio for backend processing -- or, more generally, which models in the hierarchy to run locally vs. which to run in the backend. Two overall pieces of scaffolding would be good to have: an "energy budget" model (positing energy costs for acquisition, storage, processing, and wireless rx/tx), and a list of use cases and their key figures of merit. I've not looked at LoRa or DASH7, though I did poke around at some of the satellite startups. I also looked at RFicient (https://www.iis.fraunhofer.de/content/dam/iis/en/doc/il/ics/ic-design/Datenblaetter/FactsheetWakeUpv4.pdf) as a potential technology for remote sensors (promising, still waiting on sample availability)

👍 Sam Kelly, Ed Miller, Ben Weinstein, Sara Beery
Sam Kelly (sam@conservationxlabs.org)
2020-08-03 11:02:12

*Thread Reply:* @Bob Zak after our discussion, we actually tried to compress MD for on-device deployment, and unfortunately it is not technically possible with the currently available libraries (some of the CNN operations are not available for TensorFlow Lite). The YOLO algorithms and EfficientNet seem to be the go-tos in this area. On the comms/backhaul -- would be very interested to hear about this, and the animal tracking/tag world may be a good analogy -- LTE/LoRa/Iridium/ARGOS/VHF. I know @Alasdair Davies has a module for Argos that I'm interested in. @Ed Miller 100% agree with the cascade idea and keeping algorithms tailored to predicted (or study-related) classes. I would be very interested to hear experts' thoughts on MD-lite (animal/no animal) vs robust study-specific species detectors.

😎 Ed Miller
Alasdair Davies (alasdair@shuttleworthfoundation.org)
2020-08-03 11:30:49

*Thread Reply:* @Bob Zak & @Sam Kelly @Ed Miller - super interesting thread here. I think we are all in the same realm of thought regarding what energy is used in the field on the edge, suitable models, and realistic constraints. Backhaul to send data (photos or otherwise) to the cloud for MD continues to interest me. I have done it over Iridium RUDICS (slowly) with thumbnails, and ZSL are working on Instant Detect 2 to advance this (the gateway has the modem; lower-power sensors send to the gateway for processing in the field). Currently looking at an EfficientDet model for edge thermal image detection with Leptons for an initial detection, to wake something beefier (still edge) only if it's "worth" waking in the camera space to get more confidence and to spend more power

😎 Ed Miller, Sara Beery
Bob Zak (robert.zak@comcast.net)
2020-08-03 13:12:12

*Thread Reply:* A lot to unpack there. Has anyone looked at the impact of resolution and spectral content on animal target detection? E.g. MD is severely downsampled from the sensor data, but still ~300x300 RGB. At the other extreme, the PIR sensor is effectively a single pixel (maybe two, depending on how you think of it) in a single wavelength.

Bob Zak (robert.zak@comcast.net)
2020-08-03 13:13:25

*Thread Reply:* https://openaccess.thecvf.com/content_cvpr_2017/papers/Huang_SpeedAccuracy_Trade-Offs_for_CVPR_2017_paper.pdf

Looks at (among many other salient options) the impact of image resolution -- testing 300x300 and 600x600. They find:

> The effect of adjusting image size. It has been observed by other authors that input resolution can significantly impact detection accuracy. From our experiments, we observe that decreasing resolution by a factor of two in both dimensions consistently lowers accuracy (by 15.88% on average) but also reduces inference time by a relative factor of 27.4% on average. One reason for this effect is that high resolution inputs allow for small objects to be resolved. Figure 4(b), which compares detector performance on large objects against that on small objects, confirms that high resolution models lead to significantly better mAP results on small objects (by a factor of 2 in many cases) and somewhat better mAP results on large objects as well. We also see that strong performance on small objects implies strong performance on large objects in our models (but not vice-versa, as SSD models do well on large objects but not small).

I don't believe this paper explores the effect of narrowing the spectral input (e.g. processing a black and white image). Is MD more accurate on daytime (color) photos than on nighttime, IR-illuminated B&W photos?

Sara Beery (sbeery@caltech.edu)
2020-08-03 13:15:44

*Thread Reply:* I've looked at the effect of performance vs input image resolution, and it doesn't change performance on big animals much but drastically affects performance on small animals or animals that are farther away (totally intuitive: you're shrinking an already-small object, so it loses information). Also, MDv4 takes in much bigger than 300x300; it uses a keep-aspect-ratio resizer with a min dimension of 600

Sara Beery (sbeery@caltech.edu)
2020-08-03 13:17:24

*Thread Reply:* performance on day vs night depends on species; there's a bigger effect on performance when you see a species outside of its normal time (i.e. most raccoon images are taken at night, so it struggles with raccoons during the day)

Ed Miller (ed@hypraptive.com)
2020-08-03 16:41:58

*Thread Reply:* Anecdotally, I have seen similar performance even with downscaling with large animals (as noted by Sara). I would also point out that if the downscale catches the animal most of the time, then you could use it by default, and only run a larger image occasionally as a sanity check (or if you really think something should be there, but the downscaled version didn't find it).

Sara Beery (sbeery@caltech.edu)
2020-08-03 16:46:33

*Thread Reply:* To some extent the real question is what you are trying to do on the edge. If you're trying to detect elephants, it's a very different question than trying to detect rodents using an off-the-shelf camera with a PIR sensor optimized for large mammals. I think most small mammals don't trigger the camera at all, and then if they do they're harder to detect using CV because they're small and camouflaged. I think edge systems will need to be open-source and adaptable so that the elephant/bear/big cat/large mammal folks and the rodent/bird/herp folks can both optimize them for their use cases.

👍 Ed Miller, Alasdair Davies
Bob Zak (robert.zak@comcast.net)
2020-08-04 12:02:15

*Thread Reply:* (Back to B&W performance) I guess the real test would be to train a single-channel version of MD on a dataset of photos converted to B&W. I wouldn't expect it to improve accuracy 🙂, but I bet it would be a net win in "accuracy per Joule". Similar for reducing the precision of each channel. Or, more generally -- within a fixed computation budget, what is the most efficient allocation of resources towards model depth; number (and peak spectral sensitivity) of color channels; and precision/encoding of each channel? (I'll look, but in case someone already knows) Are there references that cover this topic?

Alasdair Davies (alasdair@shuttleworthfoundation.org)
2020-08-04 14:24:34

*Thread Reply:* Did someone mention elephants 🐘 @Sara Beery 😄

🐘 Sara Beery, Elizabeth Bondi, Oisin Mac Aodha
❤️ Sara Beery, Bob Zak
😎 Ed Miller
Oisin Mac Aodha (macaodha@caltech.edu)
2020-08-06 18:03:54

*Thread Reply:* Increasing image resolution helps classification performance on the iNaturalist 2017 dataset: https://arxiv.org/pdf/1806.06193.pdf (Table 3. this is not a detection task)

👍 Sara Beery
Oisin Mac Aodha (macaodha@caltech.edu)
2020-08-06 18:07:07

*Thread Reply:* @Bob Zak Maybe I'm not fully understanding the B&W model you suggest, but isn't there a danger that any savings from reducing from a 3-channel to a 1-channel input will be dwarfed by all the later computation, which will not be impacted by changing the input dimensionality?

Bob Zak (robert.zak@comcast.net)
2020-08-06 20:03:00

*Thread Reply:* Yeah -- good point. My ignorance of model structure and resource allocation is showing 😞 Certainly, the deeper the model, the less important the size of the input layer. Thanks for the pointer to the iNat paper!

Ed Miller (ed@hypraptive.com)
2020-08-06 20:34:48

*Thread Reply:* @Oisin Mac Aodha Thanks for the pointer to the FGVC paper. The paper is focused on the classification step, and only considered resolutions from 299x299 (89k pixels) to 560x560 (314k pixels). The 4x image size difference will have some compute/energy impact, but it does also provide a 15% improvement on the Top-1 error (29.93% -> 25.37%). This could be very useful for the BearID Project face classifier, as we currently only use a 150x150 pixel face image.

Some of the discussion earlier on this thread was referring to the image resolution for an Object Detection network. For example the BearID Project's bear face detector. We have tried this on input images from 15M pixels down to 307k pixels. My completely unquantified observation shows a huge reduction in compute with little effect on accuracy (at least for my single class detector). Perhaps we could run some sweeps to quantify this.

@Sara Beery and @Siyu Yang: has anyone run compute vs accuracy comparisons for MegaDetector?

Has anyone run such comparisons for other object detectors with more classes (like those trained on COCO, etc.)?
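A sweep like the one proposed above could be harnessed roughly as follows. This is a sketch, not anyone's actual pipeline: NumPy block-averaging stands in for a proper resizer, a stub takes the place of the real detector, and pixel count is used as a crude proxy for relative compute:

```python
import numpy as np

def downscale(img, factor):
    """Block-average an HxWxC image by an integer factor (crops any remainder)."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

def resolution_sweep(images, detector, factors=(1, 2, 4)):
    """For each downscale factor, record pixel count (compute proxy) and
    whatever the supplied `detector` reports on the resized images."""
    results = []
    for f in factors:
        scaled = [downscale(im, f) for im in images]
        pixels = sum(im.shape[0] * im.shape[1] for im in scaled)
        hits = sum(detector(im) for im in scaled)
        results.append({"factor": f, "pixels": pixels, "detections": hits})
    return results
```

Swapping the stub for a real model (and ground-truth matching instead of raw hit counts) would give the compute-vs-accuracy curve being asked about.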

Oisin Mac Aodha (macaodha@caltech.edu)
2020-08-07 04:36:27

*Thread Reply:* Maybe one pointer would be the EfficientDet paper which explores models with different input resolution (but note that the model size is changing too). https://arxiv.org/abs/1911.09070

😎 Ed Miller
Bob Zak (robert.zak@comcast.net)
2020-07-01 19:55:44

(yikes -- slack newb -- was looking for something more subtle with these links 😞 )

Sara Beery (sbeery@caltech.edu)
2020-07-01 19:56:22

*Thread Reply:* No worries!

Sara Beery (sbeery@caltech.edu)
2020-07-07 20:22:53

New job posting for a data scientist at a startup working on carbon removal & climate solutions: https://apply.workable.com/carbonplan/j/E6E55611B9/

apply.workable.com
👍 Jonathan Granskog, Suzanne Stathatos, Siyu Yang
Sara Beery (sbeery@caltech.edu)
2020-07-08 15:17:41

WILDLABS is running their 3rd annual Conservation Tech Survey! Let's make sure that CVML researchers have a voice in the survey: https://mailchi.mp/wildlabs/community-survey-2020?e=7b9d646075

mailchi.mp
👍 Carly Batist, gvanhorn, Elizabeth Bondi, Talia Speaker
Jędrzej Świeżewski (jedrzej@appsilon.com)
2020-07-10 04:02:42

Hi all 👋, glad to be joining. At Appsilon we have an AI for Good initiative, which led me into the conservation space. Curious to see what you are up to and will be glad to share our results here.

Appsilon Data Science | End­ to­ End Data Science Solutions
🌍 Sara Beery, Thijs, Björn Lütjens
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-10 08:22:51

*Thread Reply:* Welcome! Loved reading the blogs about your work with camera traps in Gabon. Excited to see what comes of it. Have you worked at all with passive acoustic devices/data? I am currently trying to figure out the best way to detect lemur calls in my continuous audio recordings from Madagascar.

Jędrzej Świeżewski (jedrzej@appsilon.com)
2020-07-13 03:27:14

*Thread Reply:* Hi Carly, I don't have much experience with the analysis of acoustic data, but would be interested to read up on it. Can you recommend papers/blogs?

Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-13 12:23:49

*Thread Reply:* I can definitely go down the rabbit hole with that, but let me pull together some of the reviews and notable case studies, at least for terrestrial use. The marine mammal community uses acoustic monitoring via hydrophones extensively, but I am more familiar with the literature for terrestrial animals like my study species

Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-13 12:38:27

*Thread Reply:* Marques et al. (2013) - Estimating animal population density using passive acoustics
Heinicke et al. (2015) - Assessing the performance of a semi-automated acoustic monitoring system for primates
Sánchez-Gendriz et al. (2017) - A methodology for analyzing biological choruses from long-term passive acoustic monitoring in natural areas
Gibb et al. (2018) - Emerging opportunities and challenges for passive acoustics in ecological assessment and monitoring
Kershenbaum et al. (2019) - Tracking cryptic animals using acoustic multilateration: A system for long-range wolf detection
Wood et al. (2019) - Detecting small changes in populations at landscape scales: A bioacoustic site-occupancy framework
Sethi et al. (2020) - Characterizing soundscapes across diverse ecosystems using a universal acoustic feature set
Kvsn et al. (2020) - Bioacoustics data analysis: A taxonomy, survey and open challenges

👏 Sara Beery, Thijs, Jędrzej Świeżewski
Jędrzej Świeżewski (jedrzej@appsilon.com)
2020-07-14 04:08:44

*Thread Reply:* Thanks @Carly Batist! That is a helpful compilation!

Thijs (thijs@q42.nl)
2020-07-13 03:14:24

Hi all! I work at https://hack-the-planet.io/ where we are working on creating an AI camera trap add-on that allows for realtime classification of camera trap images. Results are pushed to a backend via a satellite uplink 📡

This smart-camera-trap project is part of our https://www.hackthepoacher.com/ initiative. We are also building a GSM detection system to track down poachers.

Thijs (thijs@q42.nl)
2020-07-13 03:15:06

I'm working with @Jędrzej Świeżewski on this project and we are planning to roll out prototypes this year. If anyone has thoughts about this project (especially about inferencing at the edge) please let me know, open for suggestions! 🙌

👍 Ankita Shukla, Siyu Yang, Jonathan Granskog, Jędrzej Świeżewski, gvanhorn, Elizabeth Bondi, Carly Batist, Ed Miller, Lloyd Hughes
Ed Miller (ed@hypraptive.com)
2020-08-01 19:58:53

*Thread Reply:* Have you already selected your hardware platform?

Tee R. (taylornroberts2009@gmail.com)
2020-07-13 21:11:45

Hey all, does anyone have any advice for someone looking to get into the field? I got a bachelor's in history about ten years ago, and have been self-learning AI and ML. I would love to be able to use those skills for environmental/natural sciences. Do I need coursework in biology and things like that? Math? (Which I admit is not my strong suit.) Any advice would be much appreciated!

Sara Beery (sbeery@caltech.edu)
2020-07-14 11:07:41

*Thread Reply:* I'm sure there are many ways to get started, but I might recommend reaching out to a local conservation group and offer your help? Many small conservation orgs don't have the bandwidth or resources to work with cutting edge tech, so it could be a nice way to get your hands dirty on a real problem.

👍 Björn Lütjens
Tee R. (taylornroberts2009@gmail.com)
2020-07-14 13:20:50

*Thread Reply:* okay!

Jonathan Granskog (jonathan.granskog@gmail.com)
2020-07-14 11:47:10

This seems pretty cool: https://www.kickstarter.com/projects/opencv/opencv-ai-kit

🙌 Sara Beery
👍 Ștefan Istrate
John Payne (drjohnpayne@gmail.com)
2020-07-14 16:24:01

I'm wondering how others have dealt with the problem of inevitable category creep: you examine 100,000 photos and discover that you have a new category that you hadn't included when you started training the model, 200 hours ago. It would be nice not to have to retrain the model from scratch. I can imagine you might a) create some lumped categories (“other”, “other_animal”, whatever) at the beginning to anticipate that, or b) perhaps even add layers to the CNN…but that could get complicated with a complex R-CNN model with ROI heads that also need to be trained…

Sara Beery (sbeery@caltech.edu)
2020-07-14 16:26:16

*Thread Reply:* Definitely tricky. One option (of many) would just be to add the category and fine-tune using your previous model as a starting point? It might converge faster.

John Payne (drjohnpayne@gmail.com)
2020-07-14 16:27:08

*Thread Reply:* Yes…so I guess the trick would be getting the weights from the smaller model to load correctly.

Sara Beery (sbeery@caltech.edu)
2020-07-14 16:27:47

*Thread Reply:* just load everything except the final layer(s) where you map to the categories?

John Payne (drjohnpayne@gmail.com)
2020-07-14 16:30:27

*Thread Reply:* It’s been a while since I looked at the code for loading weights and I can’t remember how hard it would be to just load a subset of the layers, but I think that approach makes sense. Thanks.

Sara Beery (sbeery@caltech.edu)
2020-07-14 16:32:52

*Thread Reply:* Usually you can exclude layers by name

John Payne (drjohnpayne@gmail.com)
2020-07-14 16:36:28

*Thread Reply:* Are you referring to Pytorch, I hope? Also, different topic but since I have your attention for a moment, is the code for your Context R-CNN model online yet? I’m really interested in that — such a smart idea.

Sara Beery (sbeery@caltech.edu)
2020-07-14 16:38:53

*Thread Reply:* I believe tf and pytorch allow for excluding layers while loading. Context R-CNN is online in the tensorflow object detection API (sorry, this one's not pytorch!) https://github.com/tensorflow/models/blob/master/research/object_detection/README.md#context-r-cnn
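
A minimal, framework-agnostic sketch of what "load everything except the final layer(s)" amounts to (parameter names and shapes below are made up): a checkpoint is just a mapping from parameter names to arrays, and dropping the classification head by name prefix before loading is essentially what PyTorch's `load_state_dict(..., strict=False)` or TF's by-name restoring does for you.

```python
# Hypothetical checkpoint: parameter name -> weights. Only the head
# ("fc.*") changes shape when a class is added, so only it is dropped.
checkpoint = {
    "backbone.conv1.weight": [[0.1, 0.2]],
    "backbone.conv2.weight": [[0.3, 0.4]],
    "fc.weight": [[0.5, 0.6]],  # old head, sized for the old class count
    "fc.bias": [0.0, 0.0],
}

head_prefix = "fc."
transferable = {k: v for k, v in checkpoint.items()
                if not k.startswith(head_prefix)}

# The backbone transfers; the new, larger head keeps a fresh initialization.
print(sorted(transferable))  # ['backbone.conv1.weight', 'backbone.conv2.weight']
```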

John Payne (drjohnpayne@gmail.com)
2020-07-14 16:39:52

*Thread Reply:* Great, thanks for your help.

👍 Sara Beery
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-07-15 02:32:48

*Thread Reply:* I just recently investigated this. There are not many solutions, mainly because training a CNN with only a few examples (there are not many once a new class needs to be added) does not really work. Suggestions like few-shot and meta-learning don’t work either, as they rely on many new classes/tasks with few examples.

The most recent version of AIDE solves this as follows: every time a new class is discovered, the model adds new output neurons to the classification head (of RetinaNet, ResNet, etc.) and initializes their weights by copying over some other classes’ values. This way, although the initial performance on the new class is not great, it does not alter the accuracy of the other classes at all. Thanks to active learning, the accuracy will improve over time, though. I have ideas on how to improve this mode in future releases, but for the time being, the software does support adding new classes on-the-fly, at least for the built-in models.

❤️ Sara Beery
John Payne (drjohnpayne@gmail.com)
2020-07-17 00:42:57

*Thread Reply:* Thanks Beni. I’m not sure what you mean by “adding new output neurons,” but I can imagine two ways to add a new class: Method 1) Add a new linear layer; for example if the current output layer dimensions are (bs x c) where bs is batch size and c is number of classes, then you add a new linear layer sized (c x c+1) on top of it, making your final output (bs x c+1). That means your model grows each time you add new classes. Method 2, which I think is what Sara was suggesting, is to replace the final layer (bs x c) with a new layer (bs x c+1) and simply re-learn the weights of that final layer. Out of curiosity, am I correct that you are using Method 1 in AIDE? To me, Method 2 seems a little easier to think about, but I can’t think of any major reason to prefer one over the other.

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-07-17 02:25:09

*Thread Reply:* Hi John, What I meant is the following: the final layer of a CNN maps from D to C, where D is the number of features of the penultimate layer (e.g., 512 for ResNet-18), and C is the number of classes. Effectively, this last layer contains weights of size DxC. Whenever there is a new class present, the models in AIDE expand this matrix and append a Dx1 vector (for 1 class) to it, which is initialized from a combination of existing weights. Effectively, there is no layer replacement or additional layer to add, but instead the final classification layer gets expanded to Dx(C+1). The advantage is that the weights for the existing classes can be retained, and the new class added without it confusing the model. I hope this helps!
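
The expansion trick described above can be sketched in NumPy (illustrative only, not AIDE's actual code): append a D×1 column, copied from an existing class, to the D×C head weights, and the trained classes' outputs are untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 512, 5                       # e.g., ResNet-18 features, 5 classes
W = rng.normal(size=(D, C))         # stands in for the trained head weights
b = np.zeros(C)

donor = 2                           # existing class whose weights we copy
W_new = np.concatenate([W, W[:, donor:donor + 1]], axis=1)  # D x (C+1)
b_new = np.append(b, b[donor])

# Scores for the original C classes are identical for any feature vector,
# so adding the class cannot disturb the model's existing accuracy.
x = rng.normal(size=D)
assert np.allclose(x @ W_new[:, :C] + b_new[:C], x @ W + b)
```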

John Payne (drjohnpayne@gmail.com)
2020-07-17 15:53:09

*Thread Reply:* OK thanks Beni; yes that’s what I was referring to as Method 2 except that you are also transferring the old weights. Makes sense.

Sara Beery (sbeery@caltech.edu)
2020-07-21 18:29:09

*Thread Reply:* Hey @Benjamin Kellenberger do you have any direct comparisons of this initialization vs random init when you increase the output layer? It makes total sense, and I'm curious if you get increased accuracy, increased training efficiency, or both? And when you do this do you retrain over your whole training set + the new examples? Do you freeze everything except the last layer or allow all weights to be updated?

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-07-22 14:09:09

*Thread Reply:* Hi @Sara Beery I have only run very quick tests, nothing substantial yet. I just trained a classification model (ResNet-18) on a remote sensing dataset with half of the classes (and images) removed. I then added the output neurons for a variable number of new classes and checked performance. Nothing was frozen; I always trained the full model end-to-end. With random initialization, the accuracy of the entire model went to zero; it basically put the whole model out of balance. Adding weights by copying other classes’ values resulted in still perfect accuracy for the trained classes, and (obviously) chance guess for the new class. I had to re-train on the new class for about the same number of epochs again as the initial model to get it up to speed.

I originally wanted to investigate this further and try out some more intelligent initialization schemes, but then dropped the idea. I might still do it just for AIDE.

Sara Beery (sbeery@caltech.edu)
2020-07-22 14:12:10

*Thread Reply:* Gotcha. So to get good performance on the new classes you still need to train about the same amount of time over all the data. That's kinda what I expected, but I agree it would be interesting to test further. Essentially this sounds ideal specifically for an active learning scenario? Where you'd still want the model to work "in the interim" while you're adding new training data or training on new data, before training is complete.

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-07-22 14:14:03

*Thread Reply:* Precisely. People have tried to address this with meta-learning, attention models, etc., all of which sound like overkill or have hidden costs (meta-learning requires many tasks with few examples). It would basically solve the one major problem I have with AIDE, in that it allows models to provide somewhat decent predictions at the start instead of randomness.

Sara Beery (sbeery@caltech.edu)
2020-07-22 14:21:17

*Thread Reply:* Makes total sense! Probably if you plan to retrain completely before eval though you can just take the simple path of increasing the final layer and randomly initializing all of it? If it gets to the same accuracy in the same amount of time. Just because it's easier to code up 🙂

John Payne (drjohnpayne@gmail.com)
2020-07-22 15:10:23

*Thread Reply:* @Benjamin Kellenberger I would think that you could reduce your training time for new layers by progressively un-freezing the model. As you probably know, the standard approach in transfer learning is to start with all but the new top layer frozen (sometimes BatchNorm layers should also be unfrozen), and gradually unfreeze further and further down as the training progresses. The reason for that is that the lower-level layers in a CNN are doing simple things like identifying edges and curves, …and somewhat higher up, layers are identifying shapes, textures and so on, all of which are likely to still be useful for feeding into the topmost layers that are specific to your particular classification problem. But maybe I misunderstood when you said “Nothing was frozen; I always trained the full model end-to-end.”
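
The schedule John describes can be sketched in a few lines (framework-agnostic, with hypothetical layer-group names; in PyTorch you would flip `requires_grad` on each group's parameters):

```python
# Layer groups ordered from input (edges/curves) to output (task head).
groups = ["stem", "block1", "block2", "block3", "head"]

def trainable_groups(epoch, unfreeze_every=2):
    """Head trains from epoch 0; one deeper group thaws every few epochs."""
    n_unfrozen = min(1 + epoch // unfreeze_every, len(groups))
    return groups[-n_unfrozen:]

print(trainable_groups(0))  # ['head']
print(trainable_groups(2))  # ['block3', 'head']
print(trainable_groups(8))  # everything is unfrozen
```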

Sara Beery (sbeery@caltech.edu)
2020-07-22 15:30:18

*Thread Reply:* I've not really found it to make much difference in practice to unfreeze all right away or unfreeze them progressively.

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-07-23 06:45:19

*Thread Reply:* Same here to be honest; I found the learning signal from the few new examples to be so little that there was hardly any change to the early layers. Batch norm is another story… I had terrible experience with it and since then always replace it with instance norm. Yielded the same result and is much more stable.
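
The difference Benjamin points at can be shown numerically (a NumPy sketch of the two normalizations, without learnable scale/shift): batch norm pools statistics across the whole batch, which gets noisy with the tiny batches typical when fine-tuning on few new examples, while instance norm computes them per sample.

```python
import numpy as np

x = np.random.default_rng(1).normal(size=(4, 3, 16))  # (batch, chan, len)
eps = 1e-5

# Batch norm: statistics shared over batch and spatial dims, per channel.
bn = (x - x.mean(axis=(0, 2), keepdims=True)) / np.sqrt(
    x.var(axis=(0, 2), keepdims=True) + eps)

# Instance norm: statistics per sample and channel, batch-size independent.
inorm = (x - x.mean(axis=2, keepdims=True)) / np.sqrt(
    x.var(axis=2, keepdims=True) + eps)

assert np.allclose(inorm.mean(axis=2), 0, atol=1e-6)    # every sample centred
assert np.allclose(bn.mean(axis=(0, 2)), 0, atol=1e-6)  # only batch-wise
```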

John Payne (drjohnpayne@gmail.com)
2020-07-23 16:44:55

*Thread Reply:* Interesting observations, thanks. I guess I had just accepted the orthodoxy without testing it.

John Brandt (John.Brandt@wri.org)
2020-07-24 09:22:07

Hi all -- I'm a data scientist at the World Resources Institute (https://www.wri.org) where I work on a number of deep learning / AI projects. Currently working on remote sensing change detection for tree planting and other restoration projects. Excited to see what everyone else in this channel is working on!

👍 Oisin Mac Aodha, Sara Beery, Tee R., Carly Batist, Björn Lütjens
😎 Jon Van Oast
Ben Weinstein (benweinstein2010@gmail.com)
2020-07-24 09:28:34

*Thread Reply:* Welcome @John Brandt, @Hannah Kerner I think you and John have a lot to discuss on agricultural landcover mapping in sub-Saharan Africa, also @Tony Chang on multi-temporal fusion of satellite data for tree cover estimation.

🎉 Hannah Kerner, Sara Beery
Hannah Kerner (hkerner@umd.edu)
2020-07-24 09:30:23

*Thread Reply:* Yes! @John Brandt I read your paper recently about crop type classification with attention, would be great to talk more about that

John Brandt (John.Brandt@wri.org)
2020-07-24 10:10:08

*Thread Reply:* Hi @Hannah Kerner - Ben told me great things about your work. Nothing ever came of the attention paper for crop type classification, though I'd love to revisit it. I'm currently working on monitoring trees outside of forests with Sentinel. We're working in Ghana, Niger, Cameroon, and Malawi to develop jurisdictional maps of agroforestry and restoration implementation, and (separately) working with local governments across South and Central America to develop monitoring systems for restoration implementation. The preprint of the paper is currently being updated to a new version on arXiv and hasn't been announced (since it had an R&R), so I've attached it here. Would love to connect about your work and see if there is any overlap (since we're primarily focused on ag landscapes!)

John Brandt (John.Brandt@wri.org)
2020-07-24 10:11:12

*Thread Reply:* @Tony Chang similarly interested in chatting with you! We are doing multi temporal image fusion to help mitigate changes in growing season / leafing seasons so that we can use one global model, would love to chat

Tony Chang (tony@csp-inc.org)
2020-07-31 22:59:23

*Thread Reply:* Definitely, John! Welcome to the group! Yeah, the global model challenge is one research question I'm exploring at the moment. Pretty interesting question regarding change detection too. Please feel free to connect with me. My email is tony@csp-inc.org

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-24 16:50:53

I'm wondering if anyone with more formal knowledge of the math of backpropagation/optimization can give me some insight. I am recreating a deep hyperspectral CNN with weak spectral attention. Architecture looks like this. The authors make a point of training the two arms of the network independently. Within each arm, each attention layer gets a softmax layer, and then they sum the cross-entropy losses weighted by depth in the CNN; shallower layers are down-weighted. Fine, I get that. Then they train both arms together using standard cross-entropy. What is the virtue of this two-step strategy? Why not train end-to-end and sum all losses at the same time? I've coded it and it provides a modest boost (2% val acc), but it takes a long time to train, and it's cumbersome. Repo is here: https://github.com/weecology/DeepTreeAttention , paper is here: https://arxiv.org/abs/2005.11977
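
The per-layer loss weighting described here can be sketched as follows (the weights and softmax outputs are made up for illustration; this is not the paper's code):

```python
import numpy as np

def cross_entropy(probs, label):
    return -np.log(probs[label] + 1e-12)

depth_weights = [0.25, 0.5, 1.0]       # shallow -> deep; shallower down-weighted
label = 1
block_probs = [np.array([0.4, 0.6]),   # shallowest attention block's softmax
               np.array([0.2, 0.8]),
               np.array([0.1, 0.9])]   # deepest block

# Stage 1 (per arm): weighted sum of the per-block cross-entropy losses.
stage1_loss = sum(w * cross_entropy(p, label)
                  for w, p in zip(depth_weights, block_probs))
```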

John Brandt (John.Brandt@wri.org)
2020-07-27 10:04:55

*Thread Reply:* This is really really interesting. My expectation would be that the virtue is similar to that of stochastic weight averaging (SWA) (https://arxiv.org/pdf/1803.05407.pdf), where weight snapshots are saved near the end of training and averaged, and predictions are made with the averaged weights. The authors in the above article discuss in depth how averaging these good solutions results in an even better one, based on theoretical analyses of what the cross-entropy loss surface should look like. Based on this, starting the summation of the two arms from already-trained weights is somewhat similar to SWA, where combining the two trained solutions identifies a lower optimum. The benefit in the paper you linked is that this lower optimum is then the start of a new training cycle, with new patterns to learn across the overall network, which can then find even lower optima. I would expect that if you train the overall network end-to-end, you don't get the benefit of finding two optima, combining them to find a new, even lower optimum, and then having new spaces to explore
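
As a concrete picture of the SWA mechanics (a pure-Python stand-in, not the paper's implementation): late-epoch weight snapshots are averaged, and the averaged weights are then used for prediction.

```python
def swa_average(weight_snapshots):
    """Average a list of flat weight vectors saved at late epochs."""
    n = len(weight_snapshots)
    return [sum(w[i] for w in weight_snapshots) / n
            for i in range(len(weight_snapshots[0]))]

# Three hypothetical late-epoch snapshots of a 2-parameter model:
avg = swa_average([[1.0, 2.0], [3.0, 2.0], [2.0, 2.0]])
print(avg)  # [2.0, 2.0]
```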

👍 Ben Weinstein
Ben Weinstein (benweinstein2010@gmail.com)
2020-07-27 10:55:32

*Thread Reply:* By that logic I should (strongly) decrease the learning rate at the 2nd stage of learning to not hop out of any minimum I just found in the first stage of learning. I’m still playing with the right parameters for learning rate decay.

John Brandt (John.Brandt@wri.org)
2020-07-27 13:02:17

*Thread Reply:* ya! do you use adabound?

John Brandt (John.Brandt@wri.org)
2020-07-27 13:02:31

*Thread Reply:* if you use adabound you could probably just keep the learning rate as it would have converged to SGD by then

John Brandt (John.Brandt@wri.org)
2020-07-27 13:02:43
Ben Weinstein (benweinstein2010@gmail.com)
2020-07-27 13:03:43

*Thread Reply:* no, i was using ADAM. looking now.

John Brandt (John.Brandt@wri.org)
2020-07-27 13:04:08

*Thread Reply:* basically just smoothly switches Adam to become SGD since SGD is known to generalize better than Adam but Adam is more stable early on
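
The mechanism can be sketched numerically (an illustration of AdaBound's clipping idea with an assumed bound schedule, not the library's exact code): the per-step learning rate Adam would use is clipped into a band that tightens toward final_lr, so the optimizer gradually behaves like SGD.

```python
def clipped_step_size(adam_lr, t, final_lr=0.1, gamma=1e-3):
    """Clip Adam's adaptive rate into a band that converges to final_lr."""
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    return min(max(adam_lr, lower), upper)

print(clipped_step_size(0.5, t=1))      # 0.5: band still wide, Adam-like
print(clipped_step_size(0.5, t=10**7))  # ~0.1: band has pinched to SGD
```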

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 17:26:26

Hi everyone, long set of posts. I'm giving a talk at the Florida Museum of Natural History, an overview of Computer Vision for Ecology, on September 18th; I'll post the Zoom link. What are the largest obstacles to applying AI to your projects? How would they be solved? What are our grand challenges? Let's start a discussion here. Here is my outline so far, all are welcome to comment there too. https://docs.google.com/document/d/1j79muPrdUxo6-GyYR4ML39apwfUqK5qRtxg6gg7sRkw/edit?usp=sharing

❤️ Sara Beery, Tee R., Lily Xu
👍 Talia Speaker, Ed Miller
Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 17:26:55

Why do we need computer vision for ecology? • Fieldwork is expensive and laborious • Human presence reduces detection probabilities • Ecological models are data hungry (e.g., N-mixture, Markov, Bayesian latent states)

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 17:29:15

What are the major obstacles? • Not enough data • Large barrier to entry • Cross disciplinary collaboration is difficult

Sara Beery (sbeery@caltech.edu)
2020-07-31 17:36:53

*Thread Reply:* Bias in the data we do have, natural world is long-tailed... both probably fall under "not enough data" one way or another

👍 David Healey
Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 17:37:53

*Thread Reply:* • added. “Ecology is often focused on rare classes with large geographic variation.”

👍 Sara Beery, Tee R.
Sara Beery (sbeery@caltech.edu)
2020-07-31 17:49:28

*Thread Reply:* In your data sharing section, it would be nice to talk about the opportunity to leverage bycatch when data is open

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 17:51:18

*Thread Reply:* ‘A role for museums as curators of digital information: many studies are focused on a single set of species, but record many incidental sightings that can be used as training data.’

Sara Beery (sbeery@caltech.edu)
2020-07-31 17:51:56

*Thread Reply:* training data or just useful observations that have previously been left to rot on hard drives

👍 Ben Weinstein, Tee R.
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-31 19:52:30

*Thread Reply:* I would add something to acknowledge the different toolkits ecologists and computer scientists are trained to use (and therefore stick with). E.g., every intro biostats class I've heard of uses R, as does most of the ecology world, whereas computer scientists breathe Python. Not saying one is better than the other, but it's hard to get either side to invest in learning a new programming language. If you wanted to really get at the core of the problem, you'd likely have to get undergrad programs to change the number or structure of required courses.

👍 Ben Weinstein, Sara Beery, Tee R., Björn Lütjens
Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 19:54:14

*Thread Reply:* with a nod to RopenSci, reticulate and https://keras.rstudio.com/

👍 Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-31 19:58:48

*Thread Reply:* And for data/model sharing, http://lila.science/ is a good example! Also Wildlife Insights, Xeno-Canto (which the warbleR package can pull directly from), MobySound/OrcaSound, the Macaulay library.

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 19:59:41

*Thread Reply:* shows how much outreach is needed; I'm a contributor to several of those things, but never heard of the warbleR package.

Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-31 20:06:25

*Thread Reply:* Bioacousticians love that package. Also gibbonR, monitoR (all 3 build upon seewave and tuneR). And for web apps, ARBIMON/Sieve Analytics is good, but has a limit to storage/computing before you have to pay (they've also just merged with Rainforest Connection).

Ben Koger (benkoger@gmail.com)
2020-08-01 14:47:49

*Thread Reply:* From a behavioral ecology point of view, people were traditionally forced to choose between working in artificial lab environments to get good high resolution behavioral data, or working in natural environments and making do with much coarser behavioral information. Computer vision finally lets us get high quality behavioral datasets in truly natural environments.

Ben Weinstein (benweinstein2010@gmail.com)
2020-08-01 18:05:10

*Thread Reply:* What is the next step for behavioral ecology in natural settings using these tools? What is the grand goal/obstacle?

Amrita Gupta (agupta375@gatech.edu)
2020-08-04 12:45:19

*Thread Reply:* RE: cross-disciplinary collaboration difficulties. As a CS PhD student explicitly working in the "AI for sustainability" space for the past several years, I think it is still challenging to initiate collaborations between CS researchers and ecologists. Attending events like NACCB or US-IALE to learn about domain challenges can help, but is outside of typical venues for which AI researchers can get support to attend (in line with what Sara was saying about what is considered "CS research"). Even when we do get to participate in these events, I believe whether or not a collaboration comes out of it still comes down to knowing an "advocate" who will connect people on both sides.

❤️ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 17:29:51

• Better representation, how do we make AI more inclusive?

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 17:47:31

*Thread Reply:* There is a nice debate here that parallels the discussion of double blind paper review versus active support for underrepresented groups that I think is really relevant to our community. https://www.nytimes.com/2020/07/16/arts/music/blind-auditions-orchestras-race.html?searchResultPosition=1

❤️ Sara Beery, Lily Xu
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-31 19:40:45

*Thread Reply:* Just a small note: I would use the term 'global south' rather than 'developing countries,' just to move away from the colonial terminology.

👍 Ben Weinstein, Sara Beery, Tee R.
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-31 19:43:55

*Thread Reply:* And I'd add something about making actually equitable collaborations between researchers from the global south and north. Westerners going down and telling them what to do rather than listening and working with them also reinforces this neocolonial thinking. We need to be building in-situ capacity and long-term infrastructure. As it is now, someone gets a grant and goes down for 6 months to collect data and then leaves, which is 1) exploitative (often field assistants and local collaborators/orgs are not included in pubs or even told the results) and 2) not sustainable.

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 19:47:32

*Thread Reply:* What are the measures that would help actualize this? I gave a talk virtually in Ecuador last week, and a lot of the discussion was focused on hardware resources, that just having access to the correct kinds of computational resources would be a really large step forward for a lot of field teams that I’ve been working with in Ecuador. I think there’s a trade off in terms of how much development should be focused on the cloud, which helps reduce the barrier to entry, since less coding is needed, versus getting data into the cloud, which can be really hard for my collaborators in Latin America.

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 19:52:34

*Thread Reply:* @Lily Xu any thoughts here from some of your recent organizing?

Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-31 20:13:39

*Thread Reply:* Yes the hardware is definitely big, particularly because it facilitates in-country analyses rather than just data collection. But obsolescence is also an issue, especially with how fast tech is moving now. I work in Madagascar and the computers the research station got are really old/slow/broken/etc now and shipping is basically non-existent so it's a matter of someone flying new stuff in. I'd also say that whichever side of the trade-off you want, equitable partnerships are still integral and unfortunately in many places, not equitable at all.

👍 Lily Xu
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-07-31 20:14:29

*Thread Reply:* I by no means am trying to suggest I have answers to any of this! Just giving my experience being on the ecology side of things in the name of cross disciplinary collaboration! 🙂

👍 Ben Weinstein, Sara Beery
Lily Xu (lily_xu@g.harvard.edu)
2020-08-03 14:05:15

*Thread Reply:* Hi Ben, just seeing this discussion now! I'm excited for your talk in September.

That's a great point connecting with the orchestras and faults of blind auditions/reviewing — many arguments for affirmative action apply here, too. I've done some reviewing for workshops recently (AI for Good and Mechanism Design for Social Good) which aren't blind, and during reviewer discussions I see people being a lot more mindful to authors and institutions from historically underrepresented areas, including Latin America.

In some ways, I think virtual conferences/events definitely help serve as an equalizer: conferences that formerly cost $2000+ to attend now cost $5–10. (Ariel Procaccia wrote an op-ed about the success of virtual EC)

I also think we need to, as Sara and others have mentioned here, reconsider what are "valuable" contributions to AI. For example, producing NLP datasets that are in languages other than English and other projects that help combat inequality throughout the AI pipeline, from data gathering to labeling to designing algorithms.

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 17:32:20

I'll start calling on people in the next few days with specific questions.

Tee R. (taylornroberts2009@gmail.com)
2020-07-31 22:09:35

I think the large barrier to entry is an interesting point. I am not sure if this is quite what you're looking for, but knowledge, either of AI or of the topic of the studies, can be a bit of a barrier. It can be overwhelming for someone approaching from the outside, either as a newbie or from a more technical background. I think looking at that might also encourage more grassroots help with some of the data tasks that consume so much time and effort.

Tee R. (taylornroberts2009@gmail.com)
2020-07-31 22:10:44

This may be somewhat more centered around the US experience, though I think I have seen some more grass roots connections in other places regarding ecology and natural studies...I don't know.

👍 Ben Weinstein, Sara Beery
Bistra Dilkina (dilkina@usc.edu)
2020-07-31 22:18:13

@Ben Weinstein this is a great topic. Re: inclusion/diversity in AI, I would say that interdisciplinary research projects such as AI+Ecology can play a key role in improving diversity in AI, and also in instilling more inclusivity, awareness and open-mindedness in AI researchers (like similar combinations of AI with the social and life sciences). BUT it is important to discuss the barriers to entry into this subfield for AI/CS people, as well as the barriers for ecologists. I often get approached by CS students with a passion for the environment, and environmental eng/ecology students with an interest in computational methods, but there are no clear resources on how to successfully build a research or professional path

👆 Tee R., Sara Beery, Carly Batist, Elizabeth Bondi, Lily Xu, Björn Lütjens
Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 22:35:34

*Thread Reply:* Thanks @Bistra Dilkina for these thoughts, we often see this just from the ecology side. Also, i’m hoping to point to some of the park ranger optimization work under the heading of “Integrating measures of uncertainty in machine learning and ecology”.

Bistra Dilkina (dilkina@usc.edu)
2020-07-31 22:38:00

*Thread Reply:* @Ben Weinstein are you going to also talk about remote sensing AI for ecology?

Ben Weinstein (benweinstein2010@gmail.com)
2020-07-31 22:38:52

*Thread Reply:* that's where most of my research is, so just as an example of how we can use weak supervision on unlabeled data to tackle the ‘not enough data’ obstacle.

Sara Beery (sbeery@caltech.edu)
2020-08-01 01:03:16

*Thread Reply:* As someone who is fighting to figure out how to keep doing this as a career, I can really second the lack of clear resources. Another issue, from the academic CS side, is the balancing act between impactful for conservation and publishable in CS that any CS grad student in this space has to consider when deciding what projects to focus on.

👍 Lily Xu, Siyu Yang, Ankita Shukla, Björn Lütjens
Tee R. (taylornroberts2009@gmail.com)
2020-07-31 22:22:32

I 100% agree with this, this is where I'm at. Learning the tech side but wanting to help out on the eco side too...

❤️ Sara Beery, Lily Xu
Tee R. (taylornroberts2009@gmail.com)
2020-08-02 18:35:45

You guys, I made a random viz for anyone who wants to see: https://public.tableau.com/profile/tee8225#!/vizhome/TexasBirdsProject/TexasBirdBiodiversity?publish=yes I was using the eBird data to do a few visualization projects. One fun thing I did learn is that blue herons have a far wider range than I thought. I just moved to Texas so may use this data to see if I can catch sight of a few of them, haha! 🙂

🐦 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-08-03 12:12:43

*Thread Reply:* very cool. Did you download the CSV, or is it live-updating through the app?

👍 Tee R.
Tee R. (taylornroberts2009@gmail.com)
2020-08-03 20:22:50

*Thread Reply:* I downloaded the file

karen bakker (karen.bakker@ubc.ca)
2020-08-07 12:08:57

Hi everyone - just wanted to introduce myself as a new member. I'm a Professor at the University of British Columbia, and I've been working on digital tech and environmental governance innovation for the past several years (my original training was in physics and environmental sciences). I'm currently working on a book on this topic to be published with Princeton University Press next year, and I also maintain a database of innovative digital conservation technologies (3,000+ and growing fast). Some of you might be interested in this meta-review of the academic literature on digital tech/environmental governance: Bakker, Karen, and Max Ritts. "Smart Earth: A meta-review and implications for environmental governance." Global environmental change 52 (2018): 201-211. https://www.sciencedirect.com/science/article/pii/S0959378017313730

❤️ Sara Beery, Carly Batist, Tee R., Siyu Yang, Riccardo de Lutio, Elizabeth Bondi, David
👍 Lily Xu, Megan Cromp
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-08-07 13:58:15

*Thread Reply:* Awesome compilation!! Can't believe I didn't catch this paper when it first came out, thanks for sharing here!

🙂 karen bakker
karen bakker (karen.bakker@ubc.ca)
2020-08-07 12:14:15

I also collaborate with the United Nations Environment Program on their global initiatives on digital tech and environmental data. Here's a recent article I co-authored with David Jensen (UNEP) that outlines some of the ideas that are being debated by international organizations: https://medium.com/@davidedjensen_99356/digital-planet-20-priorities-3778bf1dbc27

🙌 Sara Beery, Siyu Yang, Elizabeth Bondi
👍 Megan Cromp, Bistra Dilkina
Chris Yeh (chrisyeh96@gmail.com)
2020-08-10 14:37:56

Passing along a conference opportunity from a different Slack group:

We're happy to announce that CompSust DC 2020 has just opened up submissions! This year's DC will be held October 17-18, 2020 virtually. More info and application in the post below, and online: http://www.compsust.net/compsust-2020/

👏 Sara Beery, Lily Xu, Oisin Mac Aodha, Björn Lütjens
Lily Xu (lily_xu@g.harvard.edu)
2020-08-10 14:44:56

*Thread Reply:* woohoo! 🙂

😍 Chris Yeh
Chris Yeh (chrisyeh96@gmail.com)
2020-08-10 14:38:58
Tony Chang (tony@csp-inc.org)
2020-08-11 15:15:49

Dear AI for Conservation Group!

For the past eight years, Conservation Science Partners has been a leader in the fields of conservation biology and landscape ecology through our use of advanced geospatial, remote sensing, and ecological modeling approaches. In recent years, the intersection of environmental sciences and computer sciences has become more apparent, and CSP is now seeking to integrate talented and diverse individuals with skills in data sciences to solve some of the grand challenges in conservation. As we launch a new data science and analytics initiative, we are excited to announce two new positions: Front End Developer and Data Engineer. Please see the attached hiring announcements and circulate among your network. Also, if there are any questions regarding the position, please feel free to contact me at tony@csp-inc.org

Sincerely, Tony Chang

👍 gvanhorn, Sara Beery, Carly Batist, Talia Speaker, Björn Lütjens, Amrita Gupta
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-08-11 15:20:09

*Thread Reply:* You should also post to Wildlabs if you haven't already!

Wildlabs.net
👍 Talia Speaker, Björn Lütjens
Bistra Dilkina (dilkina@usc.edu)
2020-08-11 15:59:39

*Thread Reply:* @Amrita Gupta

Tony Chang (tony@csp-inc.org)
2020-08-11 16:04:24

*Thread Reply:* Thanks @Carly Batist I'll check it out!

Lily Xu (lily_xu@g.harvard.edu)
2020-08-13 18:01:20

The MD4SG working group on environment explores environmental challenges through the lens of computation and economics, with a focus on issues that disproportionately impact already-vulnerable populations. Our biweekly group this fall will focus on land use, climate, and energy. 🌲🌍☀️

Fill out this short survey by August 21 6pm ET if you're interested in joining our group.

Wanyi Li and I began this group in the spring and are excited for the upcoming semester!

😍 Sara Beery, Elizabeth Bondi, Siyu Yang
Björn Lütjens (bjoern.luetjens@gmail.com)
2020-08-19 18:19:00

The final list of all NeurIPS 2020 workshops is now available at: https://nips.cc/Conferences/2020/Schedule?type=Workshop Some relevant ones for this group might be: AI for Earth Sciences and Tackling Climate Change with ML.

nips.cc
👏 Sara Beery, David
David Rolnick (dsrolnick@gmail.com)
2020-08-20 07:20:09

*Thread Reply:* Weighing in from the climate change workshop, we'd love to have submissions from this group! If you do submit, remember to make explicit how your submission is relevant to climate change. Not every conservation-related problem is a climate change problem, but there can be overlap - for example if monitoring populations that are specifically under threat from climate change.

More info about the workshop here: https://www.climatechange.ai/events/neurips2020

🌍 David, Björn Lütjens
Mikey Tabak (tabakma@gmail.com)
2020-08-20 06:40:46

How do you computer vision folks keep up with the latest and most effective methods? I'm finding that there's so much new stuff published in so many places that it's hard to know which approach to use. (My background is in ecology and the field doesn't move quite as quickly, and everything is published in the peer-reviewed literature.) I'm trying to do object segmentation on really small objects (animal carcasses) from really high up (175 m drone). I've tried a few methods, but I keep finding newer stuff. I also keep maxing out my insufficient server, so I don't have unlimited computing power to try new things currently.

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-08-20 09:39:44

*Thread Reply:* There are two different kinds of recommendations I can give here. The problem of keeping up with the latest publications can be solved easily in computer vision by checking the following resources:
• arXiv: https://arxiv.org/list/cs.CV/recent
• CVF (all CV conferences): http://openaccess.thecvf.com/
As for picking the right pipeline: in all honesty, I deliberately don’t use the latest method. Oftentimes they are not widely adopted, laborious to implement, and only an incremental improvement over existing methods. Instead, I tend to stick to the models that have been shown to be good. They may not yield the very best performance, but they are solid and can be adapted to one’s needs. For me, these include:
• Image classification: I always use ResNet, even though there are plenty of newer, more powerful methods.
• Segmentation: I am a big fan of U-Net; it is easy to understand and performs like a champ. DeepLab is also great, though.
• Object detection: I have been using RetinaNet, and YOLO if needed. Most of the time I can do away with bounding boxes and just predict points, in which case I use a ResNet without the average pooling layer. Honestly, this is what I would recommend for you. I may be biased, but I have been working with tiny objects from drones, and this outperforms other models in speed, accuracy, and simplicity.

👍 Mikey Tabak, Björn Lütjens, Sara Beery, Siyu Yang
Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 10:06:20

*Thread Reply:* I agree with this. I also think it's worth directing your focus away from architectures and toward the entire pipeline. In ecology we almost always have limited data; getting familiar with fine-tuning strategies, transfer learning, generating synthetic data, and understanding model parameters such as loss weights and learning rates often makes a HUGE difference compared to the actual architecture.

👍 Mikey Tabak, Björn Lütjens, Sara Beery, Siyu Yang
Mikey Tabak (tabakma@gmail.com)
2020-08-20 10:06:44

*Thread Reply:* @Benjamin Kellenberger Thank you so much for your guidance! This is really helpful. For this project I have been using mostly DeepLabV3 and I was next going to try U-Net. I had not thought about using just points for these little carcasses instead of segmentation, but I'll give this a shot. Also thanks for providing these links! I'll keep an eye on them to stay current on the latest methods. I really appreciate you sharing your wisdom here.

👍 Benjamin Kellenberger
Mikey Tabak (tabakma@gmail.com)
2020-08-20 10:14:24

*Thread Reply:* Thank you @Ben Weinstein! (I was actually just reading your recent paper in Ecological Informatics this morning.) I have been doing a lot of fine tuning with adjusting hyperparameters and transfer learning. I currently cannot use an HPC, so adjusting parameters and then re-training a model takes a really long time on the small server we have. Do you have readings you recommend for strategies to adjust the pipeline? I've worked through some examples in the past, but I'm certainly not an expert and a lot of my adjustments are arbitrary decisions that I make in the process.

Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 11:55:28

*Thread Reply:* Testing on a small portion of your data is usually good practice; it has the added benefit of revealing any overfitting when you go from the small subset to the full dataset.

👍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 11:55:44

*Thread Reply:* Keras has a few optimizers, but I also use the ones from comet.ml

Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 11:56:02

*Thread Reply:* that helps with parameter searches.

Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 11:56:56

*Thread Reply:* More broadly, the focus on data generation is so important. I've seen transfer learning among very disparate classes work well: we used that tree model to start, trained for drone work on birds, and it was hugely beneficial.

👍 Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 11:57:26

*Thread Reply:* there is only so much information you can squeeze out of small ecological datasets.

Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 11:58:29

*Thread Reply:* I think the camera trap competitions that @Sara Beery ran also showed that grabbing imagery from other locations can be useful, even in the extreme situation when you don’t have real information for a target class (see their kaggle competition going from CA to Idaho)

👍 Sara Beery, Mikey Tabak
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-08-20 12:01:56

*Thread Reply:* For parameter optimisation, I second @Ben Weinstein — trying on a decently sized subset usually reveals enough as to what settings one needs.

👍 Mikey Tabak
Sara Beery (sbeery@caltech.edu)
2020-08-20 12:04:33

*Thread Reply:* Agreed with all of the above. I think it's good to start with something robust that has been proven out over time, and make sure you've really maxed that out by exploring data augmentation, hyperparameters, and cotraining or pretraining from other relevant public datasets. Once you have an idea of what you can do with a well-proven model it makes it easier to determine whether a newfangled method is worth trying to incorporate. I completely agree with @Benjamin Kellenberger that frequently the newest methods can be unreasonably complicated to implement and finicky to train for very small gain.

👍 Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 12:08:36

*Thread Reply:* besides our methods, right @Sara Beery

😅 Sara Beery, Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 12:08:56

*Thread Reply:* those are always worth adopting.

Ben Weinstein (benweinstein2010@gmail.com)
2020-08-20 18:22:27

*Thread Reply:* A follow-up here from today. I forgot to apply per-image normalization and centering. Added it and performance increased 20%. A lot more important than any architecture difference.
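The per-image normalization Ben mentions is only a few lines; a minimal NumPy sketch, assuming channels-last images (the exact layout is an assumption):

```python
import numpy as np

def per_image_normalize(img: np.ndarray) -> np.ndarray:
    """Center and scale one image by its OWN statistics, per channel,
    rather than using dataset-wide constants. Illustrative sketch only."""
    mean = img.mean(axis=(0, 1), keepdims=True)   # per-channel mean
    std = img.std(axis=(0, 1), keepdims=True)     # per-channel std
    return (img - mean) / np.maximum(std, 1e-8)   # guard flat images

img = (np.random.rand(64, 64, 3) * 255).astype(np.float32)
out = per_image_normalize(img)
# Each channel of `out` now has roughly zero mean and unit variance.
```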

👍 Sara Beery, Benjamin Kellenberger, Mikey Tabak
Mikey Tabak (tabakma@gmail.com)
2020-09-03 13:32:25

*Thread Reply:* @Benjamin Kellenberger @Ben Weinstein @Sara Beery Thank you all so much for your guidance! I fiddled with hyperparameters, augmentation, optimizers, and loss functions and now I have a good model (it's actually better than our technicians at finding objects). Now I'm getting started on another project and have another question, so I'll start a new thread.

🙌 Sara Beery
👍 Benjamin Kellenberger
Dan Morris (agentmorris@gmail.com)
2020-08-27 14:54:25

Two new individual ID datasets available on lila.science thanks to the good folks at Wild Me:

http://lila.science/datasets/great-zebra-giraffe-id http://lila.science/datasets/whale-shark-id

Everyone go identify some animals with AI! (Bonus: there are lots of fantastic baby zebra pictures.)

LILA BC
LILA BC
🎉 Sara Beery, Elijah Cole (Deactivated), Ben Weinstein, Oisin Mac Aodha, Tee R., Jonathan Granskog, Zac Winzurk, David Healey, gvanhorn, Björn Lütjens, Srishti, Omiros Pantazis
😍 Jon Van Oast, Chris Yeh, Riccardo Pressiani, Ankita Shukla
👍 Mikey Tabak, Siyu Yang
Oisin Mac Aodha (macaodha@caltech.edu)
2020-08-27 15:08:09

*Thread Reply:* Looks very cool! Would it be possible to have some example images on the landing page for each? Not necessarily hard examples, but just to see the variety. I like how @Sara Beery did it for her Caltech Camera Trap dataset: https://beerys.github.io/CaltechCameraTraps/

beerys.github.io
👍 Jon Van Oast
Dan Morris (agentmorris@gmail.com)
2020-08-27 15:40:04

*Thread Reply:* I do like that idea, in fact I like the idea of doing that across all LILA datasets, but then again I'm also lazy, so let's call it a crowdsourcing effort. 🙂 If anyone sends me a bunch of images from any dataset on LILA that demonstrate the breadth of the data in an interesting way, I'll find a way to either put them right on the main page for that dataset, or - more likely - host a separate HTML file somewhere that shows those images, and link to that from the main page.

🎉 Jon Van Oast, Oisin Mac Aodha, Sara Beery, Carly Batist
Matt Ziegler (mattzig@cs.washington.edu)
2020-08-31 03:28:39

Hi there! I've been lurking for a while but haven't introduced myself yet. I'm a PhD student in U Washington's ICTD Lab, mostly focused on technologies for the community/social/political aspects of conservation.

It's not really AI, but @Kennedy Murrithi, @William Njoroge and I have a new paper about basic mobile phone services for engaging with Ol Pejeta's surrounding villages: opening communication channels to improve service delivery and more fairly share the costs/benefits of conservation, and building social capacity to work together better when issues arise. (To toot my own horn, we won best paper at ACM Computing and Sustainable Societies this year!) Check it out:

The paper: https://mattziegler.net/papers/Ol-Pejeta-Phones-COMPASS2020.pdf 15-minute presentation: https://www.youtube.com/watch?v=hEUJq97n1-E 3-minute version: https://www.youtube.com/watch?v=gRryk_xUMkk

YouTube
} Matt Ziegler (https://www.youtube.com/channel/UCiiiyXNSDdl3-bNw7gT-QPQ)
👍 Oisin Mac Aodha, Lily Xu, Carly Batist, Sara Beery, David Healey, Jon Van Oast, Srishti, Mikey Tabak
Jon Van Oast (jon@wildme.org)
2020-09-01 12:18:16

looking forward to reading this paper and watching these videos. i was lucky enough to stay at ol pejeta for a bit, working with zebra and giraffe conservation (photo id software project we work on).

👍 Sara Beery, Matt Ziegler
Matt Ziegler (mattzig@cs.washington.edu)
2020-09-01 14:57:12

*Thread Reply:* Nice! I've definitely heard of you. Maybe catch you there again sometime!

🎉 Jon Van Oast
Mikey Tabak (tabakma@gmail.com)
2020-09-03 13:41:38

What software are folks using to annotate videos for training computer vision models? It looks like there are some options out there, but it's hard to tell what will work well for this application. I'm planning to do segmentation (or bounding box) on the videos and for one project I'll be trying to find volant animals at 175 meters above the camera, so the technicians will need to be able to zoom in. @Benjamin Kellenberger developed the great AIDE software for still images, and I'm wondering if there is anything of similar quality for videos.

Sara Beery (sbeery@caltech.edu)
2020-09-03 13:42:51

*Thread Reply:* I recently annotated bounding boxes in video using vatic, which is open source, but I'm not sure if it supports zooming. http://www.cs.columbia.edu/~vondrick/vatic/

👍 Mikey Tabak, Jonathan Granskog
➕ Srishti
Sara Beery (sbeery@caltech.edu)
2020-09-03 13:46:02

*Thread Reply:* I just hosted it locally, and didn't do a large amount of data.

Ben Weinstein (benweinstein2010@gmail.com)
2020-09-03 13:46:10

*Thread Reply:* i heavily use rectlabel for Mac

👍 Mikey Tabak, Zac Winzurk, Srishti, Jonathan Granskog, Sara Beery
Ed Miller (ed@hypraptive.com)
2020-09-03 21:55:45

*Thread Reply:* Anyone have experience with VoTT? Is this what is used for the Azure labeling tool?

GitHub
👍 Mikey Tabak, Mari Reeves
Utkarsh Goel (ugoel@connect.hku.hk)
2020-09-04 07:15:15

*Thread Reply:* There’s an open source project called labelme, and I’ve used it for a project earlier. Does video annotation well, but I will have to check if it has a feature for zooming in.

https://github.com/wkentaro/labelme

GitHub
👍 Mikey Tabak
Ben Koger (benkoger@gmail.com)
2020-09-14 05:13:56

*Thread Reply:* I've started using CVAT in the past few months and have really liked it. Developed by Intel but open source; it has a nice interface and supports most standard types of annotation on images and videos. It used to be a little unstable but is much better now. It's also easy to run on a server so many people can annotate the same project. Allows zooming. https://github.com/openvinotoolkit/cvat

GitHub
Mari Reeves (mari_reeves@fws.gov)
2020-12-11 18:13:57

*Thread Reply:* After looking into this question, I ended up using VoTT for ease of use and the ability to export records to different formats.

👍 Sara Beery, Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2020-09-16 11:18:31

I’m seeing a lot more people here in general than in the jobs channel. See that channel for a new position in Switzerland: my collaborators are hiring for deep learning for ecological object detection.

👍 Sara Beery, gvanhorn, Mikey Tabak, Mike C
Ben Weinstein (benweinstein2010@gmail.com)
2020-09-18 13:41:58

I’m giving a talk on “A Computer Vision for Ecology” at the Florida Museum of Natural History (+ iDigBio) at 3pm ET today. Zoom webinar: https://ufl.zoom.us/j/98489487554?pwd=dW1ULzZyTnFoQVYrczJja1JKMDV3dz09  Meeting ID: 984 8948 7554  Password: 671049. I was asked to do a wide-ranging talk, so @Siyu Yang @Sara Beery @gvanhorn, @Holger Klinck and others get a shoutout for recent work. Thanks to everyone who helped with the slides and helped address those big questions. Slides are here if interested. https://www.dropbox.com/t/GqyTNrallTfy7G9v

Dropbox
👍 Oisin Mac Aodha, Sara Beery, Elijah Cole (Deactivated), Dan Morris
Ben Weinstein (benweinstein2010@gmail.com)
2020-09-25 16:41:17

*Thread Reply:* If anyone actually wanted to see my talk on computer vision for ecology. Here is the recording https://ufl.zoom.us/rec/share/OCn_21P2mqZa2jAqy3xNJ8xXRbqgXuArqekIssggyTsn2Zvn5ccv9z1QrifnamkM.eqhoPnR_XS57QOMJ

🌳 Sara Beery
Holger Klinck (hk829@cornell.edu)
2020-09-18 13:42:31

Cool. Thanks Ben!

Ben Weinstein (benweinstein2010@gmail.com)
2020-09-18 13:44:41

*Thread Reply:* Definitely. Do you happen to have a slide that shows/plays any audio with BirdNET predicting it? I just show a screenshot and describe it.

Holger Klinck (hk829@cornell.edu)
2020-09-18 13:49:14

*Thread Reply:*

Holger Klinck (hk829@cornell.edu)
2020-09-18 13:49:30

*Thread Reply:* Feel free to modify as you see fit!

👍 Ben Weinstein
Sara Beery (sbeery@caltech.edu)
2020-09-24 16:52:38

This is cool, lots of tropical forest data about to become public: https://www.planet.com/pulse/planet-ksat-and-airbus-awarded-first-ever-global-contract-to-combat-deforestation/?fbclid=IwAR3BwzSlsuKrFZbdDwJ3t68nRxX6fibSjFVaW7QWUsc39wNGx9_kLwVwJDc

planet.com
🎉 Jon Van Oast, Lily Xu, David, Mike C
👍 Srishti, Ankita Shukla, David, Siyu Yang
Ben Weinstein (benweinstein2010@gmail.com)
2020-09-24 16:56:01

*Thread Reply:* I had a meeting with Global Forest Watch not so long ago and didn’t hear about this. @John Brandt, will you at WRI have access to these data? Global Forest Watch is explicitly mentioned in the announcement. Will there be a web portal or an agreement with WRI? What can we expect in terms of data access for the scientific community? For example, if I wanted to host a server performing DeepForest predictions for Ecuador in real time.

😍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2020-09-24 16:59:15

*Thread Reply:* @Ben Weinstein that sounds amazing

John Brandt (John.Brandt@wri.org)
2020-09-25 09:15:07

*Thread Reply:* We have a call with Planet and NICFI this afternoon to discuss the terms of the data release. We know the contract is for 2 years, with an option of extension up to 4 years, and we know that data will be released one month after the mosaic period (so November data will be released in December). We don't yet know what the process will be like for getting access to the data - we believe it will be locked behind some sort of application process, but it will be free for non profits / academics to use if they go thru the process. I don't know when they are releasing this to everyone, though I know it is planned. For the moment they're working with us at WRI and people at FAO to user test the data access

Ben Weinstein (benweinstein2010@gmail.com)
2020-09-25 10:05:17

*Thread Reply:* Thanks. Keep us informed. Let me know if I can be of use.

Sara Beery (sbeery@caltech.edu)
2020-09-25 13:08:47

Deadline to apply for this year's Geo for Good Workshop is tonight! https://sites.google.com/corp/earthoutreach.org/geoforgood20/home

👍 Jon Van Oast, Benjamin Hoffman, Srishti
Sara Beery (sbeery@caltech.edu)
2020-10-01 21:59:03

We are putting together a proposal for the Fine-Grained Visual Categorization (FGVC) workshop at CVPR 2021, with a deadline of Oct. 16. (See https://ai.googleblog.com/2020/05/announcing-7th-fine-grained-visual.html for a recap of the most recent FGVC.) We’re looking for a co-organizer for the 3rd edition of the Herbarium challenge, with data provided by the New York Botanical Garden. Kiat Chuan Tan served in this role the past two years, and kindly offered to give pointers to/share his codebase with the next co-organizer.

If this might be of interest to you, drop me a line and I’d be happy to tell you more about what it entails!

Google AI Blog
🎉 Jon Van Oast, Oisin Mac Aodha
👍 Srishti, Subhransu Maji
Carly Batist (cbatist@gradcenter.cuny.edu)
2020-10-02 09:08:28

Rainforest Connection has updated the ARBIMON platform (releasing Oct 15), now with UNLIMITED free (!!!!) storage and access to their template matching model (they're testing out random forest and neural network functionality as well). Join the listserv if you're interested in hearing about more updates.

👍 gvanhorn, Sara Beery, Sam Kelly
gvanhorn (grv22@cornell.edu)
2020-10-02 09:11:26

*Thread Reply:* very cool!

Ben Weinstein (benweinstein2010@gmail.com)
2020-10-03 23:09:30

I’m looking for papers that do multi-sensor ensemble learning. No majority voting or regression, but actually using the features generated from two data streams to make a jointly learned prediction. I’m working on RGB + Hyperspectral tree species prediction and cannot find an ensemble that outperforms hyperspectral alone. I am currently trying to concat the dense features before the softmax layer and then learn a new joint representation. No luck. Maybe @Patrick Gray @Hannah Kerner?

👍 Sara Beery
David Healey (david.w.healey@gmail.com)
2020-10-03 23:25:19

*Thread Reply:* Do the separate RGB and hyperspectral models have a lot of overlap in their errors on test sets? Are the hyperspectral errors basically a subset of the RGB errors?

Ben Weinstein (benweinstein2010@gmail.com)
2020-10-03 23:26:40

*Thread Reply:* Not yet. We have a third set of validation data. First I'm trying on the same holdout set used to train each member of the ensemble. It should AT LEAST overfit and improve, even if we later find out it doesn’t generalize to a new test set.

Ben Weinstein (benweinstein2010@gmail.com)
2020-10-03 23:27:39

*Thread Reply:* I’ve tried for example, batchnorm before concat because maybe the networks have learned features of different magnitudes.

Ben Weinstein (benweinstein2010@gmail.com)
2020-10-03 23:28:11

*Thread Reply:* i’ve tried multiplying them instead of concat.

Ben Weinstein (benweinstein2010@gmail.com)
2020-10-03 23:29:11

*Thread Reply:* https://github.com/weecology/DeepTreeAttention/blob/9b48d277df8024286a41e86100d6ad5e180ca156/DeepTreeAttention/models/Hang2020_geographic.py#L105 to make it literal.

GitHub
Ben Weinstein (benweinstein2010@gmail.com)
2020-10-03 23:29:49

*Thread Reply:* i’ve tried freezing the layers versus finetuning with the new joint loss.

David Healey (david.w.healey@gmail.com)
2020-10-03 23:30:22

*Thread Reply:* Yeah you're right to just be looking at the holdout sets first that you used to train them both. My question is: with that holdout set, are the sets of examples that the individual models get wrong subsets of each other?

David Healey (david.w.healey@gmail.com)
2020-10-03 23:31:35

*Thread Reply:* Are there a large number of specific examples that RGB gets right but hyperspectral gets wrong?

Ben Weinstein (benweinstein2010@gmail.com)
2020-10-03 23:31:43

*Thread Reply:* Don’t know yet, I’ll look. I’m sure they aren’t perfectly nested, but I hear you. RGB data (3 channels) + hyperspectral (369 channels), so at least they should be reasonably diverse.

Ben Weinstein (benweinstein2010@gmail.com)
2020-10-03 23:32:43

*Thread Reply:* It’s funny because I assumed I’d get a modest improvement, and the real goal was to study joint connections at differing layers so that the two data sources could teach each other features. I hadn’t anticipated this going so badly.

David Healey (david.w.healey@gmail.com)
2020-10-03 23:32:45

*Thread Reply:* I've had this problem before in other contexts and it just turned out that they were not orthogonal enough

👍 Ben Weinstein, Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-10-03 23:32:58

*Thread Reply:* fascinating.

David Healey (david.w.healey@gmail.com)
2020-10-03 23:35:03

*Thread Reply:* Another thing you can do is basically freeze both pretrained models, and initialize the final weights on the hyperspectral half of the concat to exactly what it is on the pretrained model, and the RGB half to 0. So you're literally starting with exactly the same individual hyperspectral model

👍 Ben Weinstein
David Healey (david.w.healey@gmail.com)
2020-10-03 23:36:06

*Thread Reply:* Then train the last layer and see if it improves at all in the first few batches
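The warm-start trick David describes can be sketched directly: with the RGB half of the final weights zeroed, the fused head reproduces the hyperspectral-only model exactly at initialization. All shapes below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, d_hsi, d_rgb = 8, 128, 64

# Stand-in for the pretrained final-layer weights of the HSI-only model.
W_hsi = rng.normal(size=(n_classes, d_hsi))

# Warm-start the fused head: copy the hyperspectral half, zero the RGB
# half, so the concat model starts out identical to HSI-alone.
W_joint = np.concatenate([W_hsi, np.zeros((n_classes, d_rgb))], axis=1)

f_hsi = rng.normal(size=d_hsi)        # frozen HSI features
f_rgb = rng.normal(size=d_rgb)        # frozen RGB features
f_cat = np.concatenate([f_hsi, f_rgb])

logits_joint = W_joint @ f_cat
logits_hsi = W_hsi @ f_hsi
# At init the fused logits equal the HSI-only logits, so training can
# only move off that baseline if the RGB features actually help.
```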

David Healey (david.w.healey@gmail.com)
2020-10-03 23:38:54

*Thread Reply:* Another thing might be to see the relationship between the number of channels you include in the hyperspectral model and the performance. With hyperspectral imaging its always possible there is so much overlap that having more than a couple dozen channels basically doesn't add anything new, and that could extend to RGB also.

Bistra Dilkina (dilkina@usc.edu)
2020-10-16 03:52:32

*Thread Reply:* @Caleb Robinson

Mari Reeves (mari_reeves@fws.gov)
2020-12-11 18:18:17

*Thread Reply:* I was doing a wetlands mapping project a few years ago where I took a different approach to this question. I had access to 8-band orthophotos, but also to several GIS layers with information about soils, vegetation type, and land cover classification. I ended up using more standard machine learning models (Random Forests, Boosted Regression, MARS) and created a training set of points across the landscape, classified as "wet" or "not wet" in roughly equal proportions. Then I used these points to "drill down" through all the different layers and ran those as predictors for my model. The spectral layers were still the most informative predictors for water. If you use something like "soil type" as a predictor, make sure you fit the model using all classes you want to predict back to; otherwise the model chokes on new classes it hasn't seen when it goes to predict (which is why numerical inputs are better than categorical). Anyway, just figured I'd share, because the models actually did a really good job of predicting wetlands on the landscape, even finding those covered with vegetation and very small anchialine pools (only a few meters wide). It was pretty compute-intensive, especially the prediction step (all Hawaiian islands on a 5 m grid), but certainly manageable with modern cloud resources. The project is on GitHub if you want to check it out. My code was pretty clunky back then, but you'll get the idea.

GitHub
Sara Beery (sbeery@caltech.edu)
2020-10-15 22:05:19

Awesome new Research Associate position opening at University of Minnesota at the intersection of computer vision and community science, collaborating with Wildlife Insights and Zooniverse! https://hr.myu.umn.edu/jobs/ext/337848

🎉 Jon Van Oast, Ben Weinstein, Lily Xu, Oisin Mac Aodha, Talia Speaker, Carly Batist, Björn Lütjens, Jason Holmberg (Wild Me)
Jon Van Oast (jon@wildme.org)
2020-10-15 22:14:45

whoa, cool.

Sara Beery (sbeery@caltech.edu)
2020-10-19 12:01:55

We've had a bunch of new people join in the last week! Want to introduce yourselves?

🎉 Jon Van Oast, gvanhorn
Ixchel Meza (ixchel.meza.ch@gmail.com)
2020-10-19 12:26:51

Hi, I’m Ixchel Meza. I’ve worked with satellite imagery to generate land cover and land cover change. Now I am working with camera trap images for classification, and next I will also join the team that works with audio. I work at CONABIO (Mexico). 👋

🌍 Sara Beery, Oisin Mac Aodha, Omiros Pantazis, Björn Lütjens
🌿 Sara Beery, Lily Xu
👋 Jon Van Oast, Sara Beery, gvanhorn, Siyu Yang, Jonathan Granskog, Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2020-10-19 12:41:29

*Thread Reply:* Welcome! What lab at CONABIO works on these issues? I work with researchers throughout Latin America, mostly Ecuador and Colombia, and we don’t have any partners in Mexico yet.

Ixchel Meza (ixchel.meza.ch@gmail.com)
2020-10-19 12:46:56

*Thread Reply:* Hi, thanks! I work at the Dirección General de Proyectos Interinstitucionales. Where do you work?

Ben Weinstein (benweinstein2010@gmail.com)
2020-10-19 12:50:28

*Thread Reply:* I’m at the University of Florida, but work closely with non-profits on biodiversity surveys. https://deepforest.readthedocs.io/, http://benweinstein.weebly.com/deepmeerkat.html. I’d be interested in hearing more about the status of this kind of work in Mexico. Is the audio work for bird detection?

Dr. Ben Weinstein
Bistra Dilkina (dilkina@usc.edu)
2020-10-19 12:52:21

*Thread Reply:* Hi, Ixchel. Welcome to this wonderful channel. I will also be very interested in learning more on work on these topics in Mexico, as my research group also works on similar topics at the USC Center for AI in Society.

Ixchel Meza (ixchel.meza.ch@gmail.com)
2020-10-19 14:03:59

*Thread Reply:* We have a project called Sipecam that involves monitoring techniques to evaluate the effect of defaunation. If you want, we could have a meeting to talk about it a little more with other parts of the team. https://sipecamdata.conabio.gob.mx/

Cindy Vargas (cvarga16@asu.edu)
2020-10-19 21:13:22

Hi, my name is Cindy Vargas! I am a first-year PhD student at Arizona State University working on marine conservation research. My research will focus on developing and testing a camera monitoring system that uses machine learning to assess fishing catch composition among small-scale fishers in Baja California Sur, Mexico.

🐟 Sara Beery, Oisin Mac Aodha, Omiros Pantazis, Lily Xu
😎 Jon Van Oast, Stefan Schneider, Mikey Tabak
Omiros Pantazis (omiros.pantazis.16@ucl.ac.uk)
2020-10-20 06:43:57

Hi all, just realised I have been hanging around without having introduced myself! I am Omi, a 1st-year PhD student at UCL, working on biodiversity monitoring with deep learning. I am also part of the Biome Health Project (https://www.biomehealthproject.com/), which tries to figure out the effects anthropogenic pressure has on biodiversity. Currently working on data coming from camera traps, but planning to work with passive acoustic monitoring as well.

👋 Oisin Mac Aodha, Sara Beery, Lily Xu, gvanhorn, Björn Lütjens, Mikey Tabak
😎 Jon Van Oast
Clara Panchaud (clarapasu@gmail.com)
2020-10-29 09:16:48

Hi, my name is Clara Panchaud, also new here! I am a master student at NTNU in Norway. I study statistics and I am interested in applications in ecology and conservation. Right now I am working on my master thesis on step selection functions and looking for a PhD for next year 😁

👋 gvanhorn, Benjamin Kellenberger, Sara Beery, Alex Borowicz, Jonathan Granskog, Lily Xu, Omiros Pantazis, Björn Lütjens, Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2020-10-30 12:22:18

Any suggestions for hyperspectral data augmentations and class imbalance? @Patrick Gray what did you use for your land cover work? I am randomly flipping and rotating 369-band inputs, and oversampling the rarest classes to about 20% of the max class rate. I’ve tried equal oversampling/undersampling and weighting losses.

👍 Mikey Tabak
Sara Beery (sbeery@caltech.edu)
2020-10-30 12:55:37

*Thread Reply:* @Elijah Cole (Deactivated) do you have any tips here?

Bistra Dilkina (dilkina@usc.edu)
2020-10-30 14:17:52

*Thread Reply:* @Caleb Robinson

❤️ Sara Beery
Caleb Robinson (calebrob6@gmail.com)
2020-10-30 19:27:12

*Thread Reply:* I've never worked with hyperspectral imagery so this is total guess work...

Have you tried determining feature importance at the band level (then filtering out some bands)? 369-dimensional inputs to a CNN seems too large. Are the spectral ranges of the bands adjacent to one another? Maybe you could merge groups. Is spectral attention over input bands a thing @Elijah Cole (Deactivated)?

You could apply dropout over channels between input and first conv layer (Similar motivation to cutout augmentation, but channel wise)
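Channel-wise dropout over input bands, as Caleb suggests, is easy to sketch; a minimal NumPy version (band count and drop rate are illustrative):

```python
import numpy as np

def channel_dropout(x: np.ndarray, p: float,
                    rng: np.random.Generator) -> np.ndarray:
    """Randomly zero whole spectral bands of one (C, H, W) input.

    Each of the C bands is dropped independently with probability p;
    survivors are rescaled by 1/(1-p) so the expected activation
    magnitude is unchanged (inverted dropout, applied channel-wise).
    """
    keep = rng.random(x.shape[0]) >= p             # one coin per band
    mask = keep.astype(x.dtype)[:, None, None]
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones((369, 8, 8), dtype=np.float32)         # 369-band HSI patch
y = channel_dropout(x, p=0.2, rng=rng)
```

In a training pipeline this would sit between the input and the first conv layer, forcing the network not to over-rely on any single band.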

Ben Weinstein (benweinstein2010@gmail.com)
2020-10-30 19:34:17

*Thread Reply:* Thanks. I'm following a recent spectral/spatial HSI paper for the architecture: https://arxiv.org/abs/2005.11977. Still trying to find the right place to handle class imbalance. I will explore band reduction too. Debating whether it's worth going down the road of synthesizing images using a GAN: https://arxiv.org/pdf/1903.05580.pdf

arXiv.org
👍 Caleb Robinson
Caleb Robinson (calebrob6@gmail.com)
2020-10-31 03:23:44

*Thread Reply:* Relevant (and very recent) paper that I just stumbled on https://arxiv.org/pdf/2010.12337.pdf

👍 Sara Beery
John Brandt (John.Brandt@wri.org)
2020-11-02 09:49:59

*Thread Reply:* I find that the effective number of samples works better than normal loss weighting methods: https://arxiv.org/pdf/1901.05555.pdf
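The class-balanced weighting from the paper John links is only a few lines: each class weight is proportional to (1 - beta) / (1 - beta^n_c), where n_c is the class count. A sketch with made-up class counts:

```python
def effective_number_weights(counts, beta=0.999):
    """Class weights from the 'effective number of samples':
    w_c proportional to (1 - beta) / (1 - beta ** n_c)."""
    weights = [(1.0 - beta) / (1.0 - beta ** n) for n in counts]
    total = sum(weights)
    # Normalize so the weights sum to the number of classes, a common
    # convention that keeps the overall loss scale comparable.
    return [w * len(counts) / total for w in weights]

counts = [5000, 500, 50, 5]        # long-tailed class histogram
weights = effective_number_weights(counts)
# Rare classes receive larger weights than common ones.
```

These weights then multiply the per-class loss terms in place of plain inverse-frequency weighting.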

👍 Sara Beery, Omiros Pantazis
Björn Lütjens (bjoern.luetjens@gmail.com)
2020-11-10 10:26:31

*Thread Reply:* Would it make sense to do Gaussian blur augmentations, i.e. run a Gaussian filter over the spatial domain? Thinking about this, there might be some more sophisticated augmentation that simulates atmospheric noise? This wouldn't address class imbalance, but rather serve as "uniform" data augmentation.
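A separable Gaussian blur over only the spatial axes might look like this (plain numpy sketch; `scipy.ndimage.gaussian_filter` with a per-axis sigma of `(s, s, 0)` would achieve the same effect):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to sum to 1."""
    radius = radius or max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def spatial_blur(cube, sigma=1.0):
    """Separable Gaussian blur over the spatial axes of an (H, W, bands)
    cube; each band is blurred independently, so the band axis itself is
    untouched."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, cube)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)
    return out
```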

Björn Lütjens (bjoern.luetjens@gmail.com)
2020-11-10 10:37:17

*Thread Reply:* A friend just recommended ATRAN (https://atran.arc.nasa.gov/cgi-bin/atran/atran.cgi) to simulate atmospheric noise. It might be most relevant for satellite rather than aircraft data, depending on the altitude your data were collected from.

👍 Sara Beery
Petar Gyurov (pgyurov93@gmail.com)
2020-11-10 12:23:03

Hey guys, I'm new here 👋 I've just started helping out the New Zealand DoC on a volunteer basis with some camera trap tooling. I'm a backend engineer by trade but have exposure to ML and data science. I studied theoretical physics at university - now I wish I had done computer vision instead! Anyway, I'm new to the conservation field and I was pleasantly surprised to see how active it is; I'm excited to contribute and learn! See you guys around.

🐅 Sara Beery, Oisin Mac Aodha, Lily Xu, Björn Lütjens
👍 Mikey Tabak
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-11-10 12:32:50

*Thread Reply:* A warm welcome to Kiwiland! Having a background other than CV is more helpful than it first seems; in my opinion, it's the combination and crossing of disciplines that makes conservation fly.

Ben Weinstein (benweinstein2010@gmail.com)
2020-11-10 12:36:08

*Thread Reply:* Honestly, what we need is more backend engineers. We've got great models; the open problems are how to share them, make them reproducible, build competent UIs for less experienced programmers, and do good data storage and metadata management.

👍 Sara Beery, Benjamin Kellenberger, Björn Lütjens
Ben Weinstein (benweinstein2010@gmail.com)
2020-11-10 12:36:21

*Thread Reply:* Those are the things we need.

Sara Beery (sbeery@caltech.edu)
2020-11-10 12:36:38

*Thread Reply:* Completely agree

Ben Weinstein (benweinstein2010@gmail.com)
2020-11-10 12:38:23

*Thread Reply:* on my todo list is to have a look at https://www.biorxiv.org/content/10.1101/2020.10.02.323329v1 and see about all the above things

👍 Sara Beery, Petar Gyurov
Petar Gyurov (pgyurov93@gmail.com)
2020-11-10 12:51:55

*Thread Reply:* @Ben Weinstein That's really encouraging to hear. I have plenty of experience with that kind of stuff and will be happy to help. I will read through the paper you sent over as it's actually something I have been thinking about already.

Mikey Tabak (tabakma@gmail.com)
2020-11-18 09:57:27

*Thread Reply:* Welcome @Petar Gyurov. This is a great place to learn from the experts. I'm an ecologist by training and I'm very fortunate to get great advice from these brilliant computer scientists.

Petar Gyurov (pgyurov93@gmail.com)
2020-11-18 10:21:25

*Thread Reply:* Thanks @Mikey Tabak I've been taking a look at some of your work (MLWIC) -- great stuff 👍

Björn Lütjens (bjoern.luetjens@gmail.com)
2020-11-10 13:41:58

Hello Everybody, Does somebody know of a pointer to a land cover/land use segmentation model in pytorch, that's ideally pretrained on pure-RGB NAIP imagery and/or Sentinel-2 imagery? Maybe @Ixchel Meza or @Patrick Gray? Thank you so much 🙂 🙂

Ixchel Meza (ixchel.meza.ch@gmail.com)
2020-11-10 14:19:22

*Thread Reply:* No, sorry. For segmentation we used a commercial product http://www.imageseg.com/

👍 Björn Lütjens
Lily Xu (lily_xu@g.harvard.edu)
2020-11-10 15:15:06

*Thread Reply:* @Caleb Robinson has some good work on land cover mapping... his CVPR 2019 paper has released code. It's in tensorflow, not pytorch, but may still be useful!

paper: https://openaccess.thecvf.com/content_CVPR_2019/papers/Robinson_Large_Scale_High-Resolution_Land_Cover_Mapping_With_Multi-Resolution_Data_CVPR_2019_paper.pdf

code: https://github.com/calebrob6/land-cover

👍 Björn Lütjens, Sara Beery, Mikey Tabak, Aaron Ferber
Björn Lütjens (bjoern.luetjens@gmail.com)
2020-11-10 17:27:18

*Thread Reply:* oh true! That repo is really amazing. Unfortunately, our whole code base is in pytorch and I'd need to access the weights of the model at different layers. So it seems quite tricky to integrate. But i'll definitely give it another look, thanks Lily!

Thanks for the link Ixchel! This might actually be interesting for another project/idea. Seems like I can upload an image to test the quality.

😊 Lily Xu, Sara Beery
☺️ Ixchel Meza
Patrick Gray (patrick.c.gray@duke.edu)
2020-11-17 10:09:37

*Thread Reply:* Hmm all my stuff is in Keras and nothing pretrained on RGB imagery. Though @Ben Weinstein may know of something good!

👍 Björn Lütjens
Caleb Robinson (calebrob6@gmail.com)
2020-11-17 14:40:11

*Thread Reply:* Thanks for the vote of confidence @Björn Lütjens! I'm actively working on a similar repo in PyTorch with which I'll train similar LC models -- I'll ping you in a few weeks when this is done.

🤗 Lily Xu
😄 Björn Lütjens
Björn Lütjens (bjoern.luetjens@gmail.com)
2020-11-18 14:37:58

*Thread Reply:* that sounds amazing! thank you so much @Caleb Robinson

Caleb Robinson (calebrob6@gmail.com)
2021-02-10 13:52:33

*Thread Reply:* @Björn Lütjens I just noticed this thread in my thread list -- forgot all about it, sorry! The repo in PyTorch is here https://github.com/calebrob6/dfc2021-msd-baseline. We use it for training baseline models for the data fusion contest, however it works "as is" for training with high-res labels too

🎉 Lily Xu, Björn Lütjens
Björn Lütjens (bjoern.luetjens@gmail.com)
2021-02-11 09:29:47

*Thread Reply:* amazing; thank you so much Caleb!! :D:D

👍 Caleb Robinson
Mikey Tabak (tabakma@gmail.com)
2020-11-18 09:55:57

Hi folks, for those of you using aerial imagery to detect animals, have you ever combined overhead and oblique imagery in the same model? And if so, do you have any suggestions? I'll be working with several datasets to detect and count (and delineate species and sex for) a few bird species. My plan is to use object segmentation, but I'm not sure this will be enough to account for the fact that some images will be from overhead (drones pointed down) and some will be oblique (from out the window of a fixed wing aircraft). I don't have any of the images yet, just planning in advance. Thank you

Oisin Mac Aodha (macaodha@caltech.edu)
2020-11-18 09:58:38

*Thread Reply:* Perhaps of interest when using "top" and "side" views: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Wegner_Cataloging_Public_Objects_CVPR_2016_paper.pdf

👍 Mikey Tabak, Sara Beery
Mikey Tabak (tabakma@gmail.com)
2020-11-18 10:03:21

*Thread Reply:* Thank you @Oisin Mac Aodha, I'm checking this out now

👍 Oisin Mac Aodha
Björn Lütjens (bjoern.luetjens@gmail.com)
2020-11-18 14:47:34

*Thread Reply:* How big are the birds in each image stream? If the sizes are very different, it might be useful to have a pipeline of (1) two separate bird detection models, one per image stream, (2) crop and resize so that the birds in each small bird tile are the same size, and (3) one bird classification model trained on both image streams, maybe even combined with ImageNet etc.

I'm totally not an expert here, so just brainstorming 🙂
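Step (2) of that pipeline, crop-and-resize to a common scale, can be sketched in a few lines of numpy (nearest-neighbour resize for simplicity; a real pipeline would likely use a proper image library, and the 64-pixel target size is an arbitrary assumption):

```python
import numpy as np

def crop_and_resize(image, box, out_size=64):
    """Crop a detector box (x0, y0, x1, y1) out of an (H, W, C) image and
    resize the crop to out_size x out_size with nearest-neighbour
    sampling, so birds from both image streams reach the classifier at
    the same scale."""
    x0, y0, x1, y1 = box
    crop = image[y0:y1, x0:x1]
    h, w = crop.shape[:2]
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    return crop[np.ix_(rows, cols)]
```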

👍 Mikey Tabak
Mikey Tabak (tabakma@gmail.com)
2020-11-18 15:21:45

*Thread Reply:* Thanks @Björn Lütjens. I'm not sure how large (how many pixels) the birds will be (I should be receiving some images soon). Another challenge is that the overhead images will be taken from different elevations, so the birds will be of different sizes even within that stream. I like the idea of having a preliminary detection model to pull out the bird, standardize the size, and then run a classification algorithm. My major concern with combining the two types of datasets, though, is that birds are going to look different from overhead vs. from the side, so I might need a different classification model for each type of view as well.

Björn Lütjens (bjoern.luetjens@gmail.com)
2020-11-18 15:25:16

*Thread Reply:* agreed! do you have sample images? 🙂 🐦

Mikey Tabak (tabakma@gmail.com)
2020-11-26 11:23:44

*Thread Reply:* Unfortunately no sample images yet. I'm also not sure I'll get to share them when I do

Sara Beery (sbeery@caltech.edu)
2020-11-23 10:35:42

Vaquita Hacks hackathon is looking for mentors!

"The Vaquita is a critically endangered species, and time is running out to save it. Join Earth Hacks, The Conservation Project International, The Countering Wildlife Trafficking Institute, Earth League International, and many more for Vaquita Hacks on Dec 12-13, the first hackathon dedicated to vaquita conservation! Projects from the hackathon will be going directly to organizations working on vaquita conservation."

• They are looking for mentors to help student participants in this program. Folks who are interested in mentoring should email Sanjana Paul <sanjana@earthhacks.io>. • Applications for student participation are open (now extended until November 23).

👍 Mikey Tabak
Mikey Tabak (tabakma@gmail.com)
2020-11-26 11:42:15

Anyone have ideas for finding a small object in video that does not look distinct from the background in any one frame? I'm searching for video clips (around 3 seconds) that contain a bat. The only way for humans to tell that it is a bat is by looking at the whole video. Segmentation on images doesn't work because often the bat looks like a cloud in an image. Classification of videos doesn't work because the bats are so small and only in the video for a very short time (I also have a very small sample size). I've tried tracking but haven't had much success. I was thinking of combining a segmentation model (like DeepLabV3) with a video classification model (3D-CNN); although I'm not really sure what this looks like. I don't actually need segmentation, though, because I don't care where it is in the video, I really just want to classify the videos. Thanks for any ideas in advance and a Happy Thanksgiving to the folks in USA!

Elijah Cole (Deactivated) (ecole@caltech.edu)
2020-11-26 15:40:34

*Thread Reply:* @Sara Beery has some work that might be useful: https://arxiv.org/abs/1912.03538

👍 Sara Beery, Mikey Tabak
Sara Beery (sbeery@caltech.edu)
2020-11-26 16:00:45

*Thread Reply:* Happy to chat and look at some data and see if something like our attention- based approach or a more traditional detection/tracking method might make sense!

👍 Mikey Tabak
Mikey Tabak (tabakma@gmail.com)
2020-11-27 06:07:22

*Thread Reply:* Thank you @Sara Beery and @Elijah Cole (Deactivated)! I hadn't thought about using Sara's Context R-CNN, but this might be the best strategy. I'm going to keep trying to develop a pipeline combining deeplab and a 3d-cnn as I described above, but I don't think this will work. I appreciate your willingness to chat Sara. I'll reach out next week.

Mari Reeves (mari_reeves@fws.gov)
2020-12-11 17:43:50

*Thread Reply:* Hi all - I'm new here, but am interested in this conversation for a few reasons. We have a funded thermal videography study of small bats (Pacific sheath-tailed bats) on Aguiguan in the Marianas. I think my colleagues at USGS in Colorado have been using a frame-differencing algorithm to detect bats in video, and I can catch up with them to see where they are with that, per your question. I also just got a project monitoring boobies in Hawaii with a long series of mostly empty camera trap images. There are intermittent boobies but also fixed decoys in the images, so I think anything I train to pick out a booby is also going to pick out the decoys, and I need a way to select only the photos with real boobies. I'm interested to read Sara et al.'s paper, and I have a flexible schedule if there's any way you'd be willing to let me sit in on your conversation?

👍 Mikey Tabak
Sara Beery (sbeery@caltech.edu)
2020-12-11 17:47:02

*Thread Reply:* We already talked, but re: the boobies...if the cameras are static can you just label where in pixel space the decoys are and throw out any detections with high IoU with the decoy locations?
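A sketch of that filtering step in plain Python, assuming detections and decoy locations are both (x0, y0, x1, y1) boxes in pixel coordinates (the 0.5 threshold is an arbitrary starting point to tune):

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def drop_decoys(detections, decoy_boxes, thresh=0.5):
    """Discard any detection whose IoU with a known decoy location
    exceeds thresh; this works because the cameras are static, so the
    decoys stay at fixed pixel positions."""
    return [d for d in detections
            if all(iou(d, decoy) < thresh for decoy in decoy_boxes)]
```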

Sara Beery (sbeery@caltech.edu)
2020-12-11 17:48:05

*Thread Reply:* And I'm sure @Mikey Tabak would love to chat with your colleagues at USGS, that sounds like a very similar challenge!

Dan Morris (agentmorris@gmail.com)
2020-12-01 09:51:35

New aerial image dataset (~21k point annotations on seabirds) added to LILA:

http://lila.science/datasets/aerial-seabirds-west-africa/

👍 gvanhorn, Sara Beery, Subhransu Maji, Ritwik
🐦 gvanhorn, Elijah Cole (Deactivated), Elizabeth Bondi, Sara Beery
Jenna James (jennaj@vulcan.com)
2020-12-01 14:29:45

Hi there, my name is Jenna. I am a User Experience Designer at Vulcan Inc. supporting a project on dolphin signature whistle annotation. I am interested in gathering more information and gaining a better understanding about the marine mammal bioacoustics community, specifically acoustic dataset processes and annotations.

All information collected will be used for research purposes to help inform how we can best support this community. The survey should take around 10 minutes.

Thank you for taking the time to fill out this survey. Your input is greatly appreciated! Please feel free to share this survey.

https://forms.gle/jvRXh3kjYHKPqUZ78

Cheers, Jenna, User Experience Designer at Vulcan Inc.

🐬 Sara Beery, Lily Xu, Zac Winzurk
Sara Beery (sbeery@caltech.edu)
2020-12-01 14:30:43

*Thread Reply:* @Holger Klinck

Mari Reeves (mari_reeves@fws.gov)
2020-12-11 17:53:37

Hi everyone - I'm new here. I work on threatened and endangered species with the US Fish and Wildlife Service in the Pacific Islands. I have several projects involving camera trap and drone imagery for monitoring wildlife behavior and performing counts. In the process of labeling a dataset for training an object detector, the question came up: how should you handle uncertainty about objects in your images? I have a collaborator using VoTT to tag images of breeding birds on the island of Lehua. Some birds are pretty obvious, but there are also a lot of rock outcrops that look similar to the targets of interest. One perspective is that you should only tag targets you are certain about, but I am planning to use a YOLO object detector, and I think leaving ambiguous objects untagged will confuse the model and lead to poor predictions, because the model does learn from context in the empty parts of pictures. So @Dan Morris had the idea to assign two separate tags and weight them differently during training (one for "definitely bird" and another for "maybe a rock"), but he also suggested I throw this question out to the group here. I can't find much discussion of the experimental-design aspects of annotation; it's mostly nuts and bolts. Your feedback is appreciated. I've attached one of our tagged drone photos so you can see the objects of interest - they are the white dots in this image.

🕊️ Sara Beery, Lily Xu
Sara Beery (sbeery@caltech.edu)
2020-12-11 17:58:28

*Thread Reply:* @gvanhorn this reminds me a bit of some of what you were talking about this morning about hard/unlabeled stuff in your audio data, any thoughts?

To me I guess an important first thing to think about is what types of errors would be worse in this scenario (overpredicting birds vs. missing birds), and if you can have human experts address the challenging cases?

Mari Reeves (mari_reeves@fws.gov)
2020-12-11 18:00:22

*Thread Reply:* I think it's worse to miss birds, since the ultimate objective is to develop models to identify them to species level if possible and count them, and my other fear is that they look so similar, it will actually confuse the model (resulting in worse performance) as it tries to distinguish between the two.

Mari Reeves (mari_reeves@fws.gov)
2020-12-11 18:02:41

*Thread Reply:* I also was a little concerned when I read that YOLO models sometimes don't perform as well on small targets (which is obviously what we have here), so if anyone has used these models specifically to do a similar project, I am interested in your results.

Sara Beery (sbeery@caltech.edu)
2020-12-11 18:04:12

*Thread Reply:* This is a drone image, do you have a camera trap example? Are you thinking of training a joint model or separate models for the camera traps and the drones?

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-11 19:06:31

*Thread Reply:* In my experience with trees, it's always better to include everything; otherwise you risk training your detector to ignore potential objects that look similar to the objects you are trying to predict. It depends a bit on whether recall or precision is more important for your question.

👍 Sara Beery, Mari Reeves
Ben Weinstein (benweinstein2010@gmail.com)
2020-12-11 19:06:59

*Thread Reply:* I am also working on data preprocessing using deep autoencoders where you first screen your data for potential annotation anomalies.

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-11 19:07:27

*Thread Reply:* so my recommendation is to include them, since you can always throw annotations out later. You can never recover annotations you didn't mark.

Ed Miller (ed@hypraptive.com)
2020-12-11 20:05:34

*Thread Reply:* In the dlib labeling tool, imglab, you can mark regions to ignore. The dlib object detector training (MMOD) will not use these boxes for training, and it will also not use these areas as negative examples. I'm not sure if YOLO or other object detectors have a similar mechanism.

Mari Reeves (mari_reeves@fws.gov)
2020-12-11 20:37:31

*Thread Reply:* @Sara Beery the camera traps and drone images are different projects. I am not aware that we have any co-collected data of these different types.

👍 Sara Beery
Mari Reeves (mari_reeves@fws.gov)
2020-12-11 20:38:54

*Thread Reply:* @Ed Miller and @Ben Weinstein thanks for the feedback, interesting.

John Payne (drjohnpayne@gmail.com)
2020-12-12 01:39:38

*Thread Reply:* It may depend on what your purpose is. I’m working on object recognition for aerial survey images, the purpose being to derive population estimates from counts of animals on transects. Since we need to assess the bias, precision and accuracy of our population estimates, there is no substitute in our case for hand-checking a large set of randomly-selected images so that we can estimate both a false-positive and a false-negative (missed animal) rate for the neural network model. For jobs where it isn’t as important to understand bias, precision and accuracy, you may be able to get away without manual checking. Multi-model comparisons, Bayesian updating, or other clever automated methods may be able to replace hand-checking in some cases, but human observers are still the gold standard when you’re on the wild frontier and the abilities of models haven’t been thoroughly assessed.
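The hand-checking step reduces to a small calculation once the random sample has been reviewed; a sketch, assuming each checked item is recorded as a (model_detected, truly_present) pair:

```python
def error_rates(checked):
    """Estimate false-positive and missed-animal (false-negative) rates
    from a randomly sampled, hand-checked set of
    (model_detected, truly_present) pairs."""
    fp = sum(1 for pred, truth in checked if pred and not truth)
    fn = sum(1 for pred, truth in checked if truth and not pred)
    n_pred = sum(1 for pred, _ in checked if pred)
    n_true = sum(1 for _, truth in checked if truth)
    return (fp / n_pred if n_pred else 0.0,
            fn / n_true if n_true else 0.0)
```

The resulting rates can then be used to correct raw model counts when deriving population estimates, as described above.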

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-12 10:54:11

*Thread Reply:* @John Payne what are you working on? We should chat, sounds basically like the project we have in the everglades on wading bird population dynamics from drone surveys. http://tree.westus.cloudapp.azure.com/everglades/

John Payne (drjohnpayne@gmail.com)
2020-12-12 13:27:01

*Thread Reply:* Hi @Ben Weinstein, I have two survey projects; one with Howard Frederick on aerial surveys in Tanzania and another with a colleague who is using drones in Kazakhstan. I look forward to talking — it would be great to get a conversation going around these issues.

Mari Reeves (mari_reeves@fws.gov)
2020-12-14 14:01:23

*Thread Reply:* @Ben Weinstein @John Payne I would love to chat with you both about methods and study systems. Although we are working towards using drone surveys of breeding birds on Lehua, our refuges program has a drone and there is just alot of interest in using this technology to conduct surveys. An intro meeting would be great.

Sara Beery (sbeery@caltech.edu)
2020-12-14 14:03:40

*Thread Reply:* @Benjamin Kellenberger is my go-to expert on animal surveys in drone data, I'd recommend bringing him in to the meeting as well!

John Payne (drjohnpayne@gmail.com)
2020-12-14 14:16:45

*Thread Reply:* @Mari Reeves I look forward to hearing about your project. @Sara Beery That’s a good idea. We’re in communication with Beni and have been experimenting with his AIDE program. I’ll send out an email to organize a call.

Nathaniel Rindlaub (nathaniel.rindlaub@tnc.org)
2020-12-14 18:01:28

*Thread Reply:* Hey all - wow great timing. I work for The Nature Conservancy and we’re about to embark on an aerial data labeling project (sea bird census around Palmyra). We’re currently trying to evaluate tools for processing and labeling large geoTIFF orthomosaics; I hopped on this channel to see if anyone had any recs for workflows/software to assist and this conversation has already been super helpful. I didn’t know about AIDE or ImgLab. Does anyone have any other recommendations for drone data labeling tools we should check out?

Howard L Frederick (simbamangu@gmail.com)
2020-12-16 02:57:55

*Thread Reply:* @Nathaniel Rindlaub are you looking for automated processing or manual or a mix thereof?

Nathaniel Rindlaub (nathaniel.rindlaub@tnc.org)
2020-12-16 10:45:35

*Thread Reply:* @Howard L Frederick we're looking to develop a model that will assist in a manual review process, so a mix I suppose. However we're just in the early stages of this and are trying to figure out a workflow for building up a training dataset at the moment.

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:16:33

*Thread Reply:* We have a private Zooniverse project up and running for everglades wading birds.

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:16:37

*Thread Reply:* Works great and is easy.

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:19:32

*Thread Reply:* also, btw, starting from a pretrained model is super useful for transfer learning. For the birds we start from a tree model. Shameless plug for https://deepforest.readthedocs.io/en/latest/

👍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:21:32

*Thread Reply:* and once the first paper comes out i can share the egret label model, which should be useful for other nesting birds.

Nathaniel Rindlaub (nathaniel.rindlaub@tnc.org)
2020-12-16 19:50:00

*Thread Reply:* @Ben Weinstein cool! Very helpful - I'll be sure to check out Zooniverse, and I'd love to try out the egret label model when it's ready. Also interesting that a model trained to detect trees is a good place to start for finding birds sitting in them!

I am just getting up to speed on our project (@Vienna Saccomanno can speak to all of this in more detail), but I believe we are trying to work towards (1) a detection model that can pick out birds both roosting in palm trees and floating on the water... and then (2) likely a classifier that can classify them as either blue-footed or red-footed boobies.

Vienna Saccomanno (v.r.saccomanno@tnc.org)
2020-12-16 20:21:33

*Thread Reply:* @Ben Weinstein this is incredibly helpful and interesting. For our work on Palmyra to census four different species of seabirds, we are mostly focused on the birds in the canopy (mostly Pisonia and Heliotropium), and I am intrigued by your use of DeepForest. We've been concerned about animals partially covered by the canopy, so I'm very much looking forward to your first paper. Automated processing is the goal, but we are aware that some level of manual review might be necessary.

Sara Beery (sbeery@caltech.edu)
2020-12-16 20:29:57

*Thread Reply:* I'm a pretty strong believer that all AI for conservation projects should have some built-in, ongoing manual review, if only to verify difficult data and do quality control! My hope is that this process would be much more lightweight than reviewing all the data, but it would help catch any obvious issues and make sure your model maintains accuracy over time.

Vienna Saccomanno (v.r.saccomanno@tnc.org)
2020-12-16 20:42:24

*Thread Reply:* Thanks for that, @Sara Beery. I think that is right and looking forward to learning more.

Howard L Frederick (simbamangu@gmail.com)
2020-12-17 00:13:13

*Thread Reply:* @Nathaniel Rindlaub I processed a few big orthos not long ago using ImageMagick to cut the image into tiles, then giving annotators folders of images in labelImg. I'd probably do it in CVAT if I repeat the process, since you can manage multiple users and overlap images between individual annotators to check consistency.
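The tiling step itself is simple enough to do in numpy instead of ImageMagick; a sketch (tile size and overlap are arbitrary defaults, and the offsets are kept so annotations can be mapped back to the full orthomosaic):

```python
import numpy as np

def tile(image, tile_size=512, overlap=64):
    """Cut an (H, W, C) array into overlapping tiles for annotation;
    yields ((row_off, col_off), tile) so labels can later be mapped back
    to mosaic coordinates. Overlap keeps objects on tile borders fully
    visible in at least one tile and lets annotators be cross-checked."""
    stride = tile_size - overlap
    h, w = image.shape[:2]
    for r in range(0, max(h - overlap, 1), stride):
        for c in range(0, max(w - overlap, 1), stride):
            yield (r, c), image[r:r + tile_size, c:c + tile_size]
```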

Nathaniel Rindlaub (nathaniel.rindlaub@tnc.org)
2020-12-17 10:48:00

*Thread Reply:* @Howard L Frederick amazing! Thanks for your recs! I'll definitely check them out.

Sara Beery (sbeery@caltech.edu)
2020-12-17 10:52:38

*Thread Reply:* @gvanhorn has some awesome work on how to intelligently & efficiently combine annotations (including bboxes) from sets of annotators. I used it when building the Caltech Camera Traps dataset. https://openaccess.thecvf.com/content_cvpr_2018/html/Van_Horn_Lean_Multiclass_Crowdsourcing_CVPR_2018_paper.html

Howard L Frederick (simbamangu@gmail.com)
2020-12-17 11:55:22

*Thread Reply:* @Nathaniel Rindlaub there are some reviews of annotation software linked here:
• https://medium.com/tektorch-ai/best-image-labeling-tools-for-computer-vision-393e256be0a0
• https://lionbridge.ai/articles/image-annotation-tools-for-computer-vision/
• https://hackernoon.com/the-best-image-annotation-platforms-for-computer-vision-an-honest-review-of-each-dac7f565fea

🙌 Nathaniel Rindlaub
Howard L Frederick (simbamangu@gmail.com)
2020-12-17 11:56:03

*Thread Reply:* … however there is a wide variety of features and I have yet to find one that does everything I want. Some don’t zoom, most don’t deal with teams, some have very heavy web/cloud requirements (vs. in-house) …

Koustubh Sharma (koustubh@snowleopard.org)
2020-12-15 01:15:15

Hi all, thrilled to be here! Thank you @Dan Morris for the kind invite. We work with snow leopards, and recently partnered with Microsoft, who developed a cool tool for us to scan through thousands of camera trap images of snow leopards for analysis using spatial capture-recapture. I'm here to learn and explore the rapidly growing overlap between mathematical statistics, computer science, ecology and biology, and technology! My 30 seconds' worth of claim to fame has been the opportunity to feature in an MS AI ad last year 😊: https://www.youtube.com/watch?v=68WheTADA-g&t=7s

🌍 Omiros Pantazis, Sara Beery, Lauren Gillespie, Manish Rai, Jason Holmberg (Wild Me)
🐆 Carly Batist, Megan Cromp, Ankita Shukla, Stefan Schneider, Ming Zhong, Jason Holmberg (Wild Me), Elizabeth Bondi
😍 Chris Yeh, Jason Holmberg (Wild Me)
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2020-12-16 21:17:55

*Thread Reply:* Nice to meet you, and great video! We're working on individual ID for snow leopards with Pose Invariant Embeddings. Would love to talk further.

👍:skin_tone_4: Koustubh Sharma
Koustubh Sharma (koustubh@snowleopard.org)
2020-12-16 22:29:01

*Thread Reply:* Hi @Jason Holmberg (Wild Me), heard about your team from Dan. Let me know please when a good time to connect!

👍 Jason Holmberg (Wild Me)
Sara Beery (sbeery@caltech.edu)
2020-12-15 16:40:45

New distributional robustness benchmark released today that includes camera trap data! https://twitter.com/sarameghanbeery/status/1338944867037147138

🎉 Jon Van Oast, Olivier Gimenez, gvanhorn, Caleb Robinson
👍 Benjamin Kellenberger, Caleb Robinson
Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:09:38

@gvanhorn , @Christine Kaeser-Chen, others: any advice for screening training data for label confusion in fine-grained classification? I'm building an autoencoder to find outliers (HSI data on trees), looking for probable cases where a tree has been labeled e.g. 'black oak' but its spectral signature clusters better with 'white oak' (80 species). Right now I'm doing (1) train a stacked autoencoder (https://github.com/weecology/DeepTreeAttention/blob/c7e39b0402943076caa0d151272b34cf36738430/DeepTreeAttention/generators/cleaning.py#L14), (2) define outliers as samples above the 95th percentile of reconstruction error. The whole idea works great for finding outliers relative to the entire dataset (like this image, where the labeled trees had since been cut down...), but I feel like it's missing the class information: not just "this sample looks different from all others," but "this sample looks different from all other black oaks." Maybe have the autoencoder also predict the class to form sparse embeddings? I'm not sure how to define the loss to make the groups compact.
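The percentile threshold in step (2) reduces to a few lines once per-sample reconstruction errors are in hand, and a per-class variant ("different from all other black oaks") only needs the labels as well (numpy sketch; function names are illustrative):

```python
import numpy as np

def flag_outliers(errors, q=0.95):
    """Flag samples whose autoencoder reconstruction error exceeds the
    q-th percentile of errors across the whole dataset."""
    errors = np.asarray(errors, dtype=float)
    return errors > np.quantile(errors, q)

def flag_outliers_per_class(errors, labels, q=0.95):
    """Same idea, but thresholded within each class: flags samples that
    look different from the other members of their own class."""
    errors = np.asarray(errors, dtype=float)
    labels = np.asarray(labels)
    flags = np.zeros(len(errors), dtype=bool)
    for c in np.unique(labels):
        mask = labels == c
        flags[mask] = errors[mask] > np.quantile(errors[mask], q)
    return flags
```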

gvanhorn (grv22@cornell.edu)
2020-12-16 15:16:34

*Thread Reply:* Yeah, autoencoders are a cool way to go for finding outliers in the dataset as a whole. For identifying class label mistakes, there are lots of ad hoc tactics out there (which you have probably thought of), all of which require some brute-force examination of the data. Cross-validation and looking at the entropy of the predictions coming from a classification network is one standard tactic. Essentially like an active learning setup.

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:17:14

*Thread Reply:* Okay, so it would be a bad idea to concat the actual class to the autoencoder and try to get it to predict both the image and the label?

gvanhorn (grv22@cornell.edu)
2020-12-16 15:24:52

*Thread Reply:* Maybe worth a shot. Depends on how much time you want to spend on the problem/coding/debugging. If you have prior knowledge about which classes might get confused, it might be quickest to just browse through that subset looking for mistakes. With enough data, I typically embrace the noise in the training set and worry more about the quality of the evaluation set.

👍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:27:41

*Thread Reply:* thanks, I appreciate it. It's an interesting problem because it's not traditional mislabeling in the sense that an annotator thought it was a sharp-shinned hawk when it was a Cooper's; it's more subtle. Ground truth is captured by people marking the tree stems with points, so there is a lot of guesswork about which tree crowns are visible from the canopy. In this example, our detection algorithm (blue box) predicts one tree, but there are two ground-truth species underneath: one correctly predicted, one incorrectly predicted, obviously, because there is only one crown. I'm looking for an algorithmic way to anticipate these scenarios. But yes, hand review!

gvanhorn (grv22@cornell.edu)
2020-12-16 15:30:06

*Thread Reply:* Is there a box associated with each dot? (one-to-one?)

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:30:34

*Thread Reply:* nope! ground truth gets a point. The box is predicted from my tree detection model. Fun, right?

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:30:52

*Thread Reply:* so you have to choose which pixels go with each point for training.

gvanhorn (grv22@cornell.edu)
2020-12-16 15:31:29

*Thread Reply:* hmm, interesting. Your ultimate goal is boxes or points?

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:32:03

*Thread Reply:* that's an interesting question. boxes i suppose, since that entire thing is a tree.

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:32:08

*Thread Reply:* no one has ever asked me that before.

gvanhorn (grv22@cornell.edu)
2020-12-16 15:34:57

*Thread Reply:* No idea if this is helpful, but have you seen this work? https://arxiv.org/abs/1602.08405 https://arxiv.org/abs/1708.02750

gvanhorn (grv22@cornell.edu)
2020-12-16 15:35:26

*Thread Reply:* Not sure if it is related, but your example figure reminded me of those works

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:36:08

*Thread Reply:* totally, thanks for the ideas.

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:36:32

*Thread Reply:* @Sara Beery I see you lurking on my tree problems.

Sara Beery (sbeery@caltech.edu)
2020-12-16 15:39:27

*Thread Reply:* @Ben Weinstein always lurking 🙂

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:39:42

*Thread Reply:* 🙂

Sara Beery (sbeery@caltech.edu)
2020-12-16 15:40:01

*Thread Reply:* But also I'm running into the same issues so I'm hoping you just solve them for me ☺️

John Payne (drjohnpayne@gmail.com)
2020-12-16 15:42:33

*Thread Reply:* Ben I think your question of whether “this black oak is different from all other black oaks” is an important one. This is kind of an inelegant, brute-force suggestion, but how about running the autoencoder on one class at a time (I mean in addition to running it on the whole dataset)? Would that be prohibitively expensive?

Sara Beery (sbeery@caltech.edu)
2020-12-16 15:43:16

*Thread Reply:* I was actually kinda thinking the same @John Payne but it might be the case that you wouldn't have enough data per-class?

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:44:58

*Thread Reply:* I mean, definitely not enough data. But no, it's easy to do, no computational challenge; we have enormous HPC resources. I'm not entirely sure it's different from reconstructing all images plus the label, so your autoencoder has two heads and you take the joint loss.
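A toy numeric sketch of the two-headed loss Ben describes: one head reconstructs the image, the other predicts the species label, and training minimizes their weighted sum. The shapes, weights, and example numbers are purely illustrative.

```python
# Illustrative sketch (not Ben's code): reconstruction MSE plus label
# cross-entropy, combined with assumed weights alpha and beta.
import math

def joint_loss(recon, image, class_probs, label_idx, alpha=1.0, beta=0.5):
    """alpha * MSE reconstruction loss + beta * cross-entropy on the label head."""
    mse = sum((r - x) ** 2 for r, x in zip(recon, image)) / len(image)
    ce = -math.log(class_probs[label_idx])  # negative log-likelihood of the true class
    return alpha * mse + beta * ce

image = [0.2, 0.4, 0.6]
recon = [0.25, 0.35, 0.6]       # decoder output
class_probs = [0.1, 0.7, 0.2]   # classifier-head output (already softmaxed)
print(joint_loss(recon, image, class_probs, label_idx=1))
```

In a real framework the two terms would come from separate decoder and classifier heads sharing one encoder, but the loss arithmetic is the same.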

Sara Beery (sbeery@caltech.edu)
2020-12-16 15:48:02

*Thread Reply:* I guess that is probably theoretically similar, and might help with the lack of data, but in the per-class sense you're letting the autoencoder learn only relevant features for reconstructing a specific class, which might let it separate out the clusters a bit more? As opposed to needing to learn to reconstruct all species?

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-16 15:49:32

*Thread Reply:* i'll give it a try today. writing it now.

John Payne (drjohnpayne@gmail.com)
2020-12-16 21:22:08

*Thread Reply:* I had another idea; not sure if it’s any good. You may have seen this recent paper on anomaly detection: https://arxiv.org/abs/1911.10676. They train an autoencoder to restore degraded images (rotated or color loss), which forces it to learn semantic features of the class (they run it on one class at a time). Then during inference, normal (non-anomalous) images can be restored properly, but anomalous images that are missing the proper semantic features will not restore properly and so cause large restoration losses. Their method performs extremely well on several anomaly detection tests.

So: I wonder if you could use that method, but also include the label as part of the input: basically pass a one-hot encoded vector of class labels to the autoencoder along with the image. During training, it would learn to tell the classes apart and how to restore them. Then during inference, if the image was a good example of the class, presumably it would restore properly, but if it was labeled wrong then the model might choose the wrong set of features to restore. I suppose you might need a fairly deep model if you had a lot of classes to learn, but in the paper they mention a Resnet 34, so perhaps it’s not out of reach?
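A minimal sketch of the input-construction step John suggests: append a one-hot class label to the image fed to the restoration autoencoder, by broadcasting each label entry into a constant channel. The function name and shapes are assumptions for illustration, not from the paper.

```python
# Hypothetical sketch: condition a restoration autoencoder on the class label
# by stacking one-hot label planes onto the image channels.
import numpy as np

def condition_on_label(image, label_idx, num_classes):
    """Stack num_classes constant channels (the one-hot label) onto an
    (H, W, C) image, so the autoencoder receives the class identity as input."""
    h, w, _ = image.shape
    one_hot = np.zeros(num_classes, dtype=image.dtype)
    one_hot[label_idx] = 1.0
    label_planes = np.broadcast_to(one_hot, (h, w, num_classes))
    return np.concatenate([image, label_planes], axis=-1)

img = np.random.rand(64, 64, 3).astype(np.float32)
x = condition_on_label(img, label_idx=2, num_classes=5)
print(x.shape)  # (64, 64, 8)
```

The hoped-for behavior: a mislabeled image gets conditioned on the wrong class planes, so the restoration head picks the wrong feature set and its restoration loss comes out high, flagging the label for review.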

👍 Sara Beery
Vienna Saccomanno (v.r.saccomanno@tnc.org)
2020-12-16 19:07:05

Greetings everyone - I just joined this slack channel (thanks for the invite @Sara Beery and @Nathaniel Rindlaub). I am a scientist at The Nature Conservancy and I recently began a project focused on using machine learning models to detect and count sea birds in drone imagery. I am somewhat new to the AI space and look forward to learning from the people in this community.

🎉 Sara Beery, Jason Holmberg (Wild Me), Carly Batist
🤗 Nathaniel Rindlaub, Jason Holmberg (Wild Me), Sara Beery
Sara Beery (sbeery@caltech.edu)
2020-12-16 19:07:45

*Thread Reply:* Welcome!!!

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-12-17 03:36:28

*Thread Reply:* Welcome indeed! 😄

Carly Batist (cbatist@gradcenter.cuny.edu)
2020-12-17 10:51:28

*Thread Reply:* Great to have another person with a science background getting into the AI/ML stuff! We can commiserate together 😅 though I work with mammals not birds..

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-17 14:27:11

Thanks everyone who joined the wildlife drones meeting. Just placing a few links in this thread. @Benjamin Kellenberger can you link to those papers and AIDE for later reference.

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-17 14:28:34

*Thread Reply:* Our pipeline uses Agisoft for orthorectification, Zooniverse for annotation, DeepForest for transfer learning (https://deepforest.readthedocs.io/), and an R Shiny web dashboard (http://tree.westus.cloudapp.azure.com/everglades) hosted on an Azure VM instance.

Howard L Frederick (simbamangu@gmail.com)
2020-12-17 14:56:46

*Thread Reply:* Very interesting discussion - needed another 4h or so …

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2020-12-18 04:58:52

*Thread Reply:* Hi Ben, thanks a lot for making yesterday’s meeting happen! It was great to see and get to know you all. Here are some links to my work:

  1. Detecting animals in drone images and using as much of the background for training as possible: https://arxiv.org/pdf/1806.11368.pdf
  2. Weakly-supervised object detection (“image has/doesn’t have animals, localize them for me”): http://openaccess.thecvf.com/contentCVPRW2019/papers/EarthVision/KellenbergerWhena[…]ifferenceImprovingWeakly-SupervisedCVPRW2019_paper.pdf
  3. AIDE paper: https://besjournals.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/2041-210X.13489
  4. AIDE GitHub link: https://github.com/microsoft/aerial_wildlife_detection
Casey Youngflesh (caseyyoungflesh@gmail.com)
2020-12-17 16:02:49

Hello, everyone! I’m new to the AI for Conservation Slack and thought I’d introduce myself. I’m a quantitative ecologist and postdoc at UCLA, broadly interested in how ecological systems are responding to global change (particularly with regard to phenology and population dynamics). Methodologically, most of what I do involves applying hierarchical Bayesian models to ‘large-scale’ ecological data. I’m fairly new to AI but currently working on a project to identify animals in satellite imagery. Looking forward to connecting with the community!

🌍 Sara Beery
👋 Jon Van Oast
🙌 Mari Reeves
Katie Breen (cbreen@uw.edu)
2020-12-17 18:17:45

Hello! Thank you @Dan Morris for the invite! I am a PhD student at the University of Washington working on identifying snow and storms for wildlife applications from a large-scale camera network based in Norway. I am new to AI and the community, and grateful to Dan and @Sara Beery for opportunities to learn, like Sara's awesome tutoring session on building your own model this past summer with WILDLABS. I like running and cooking, and I look forward to connecting more with all of you!

🎉 Sara Beery, Lily Xu
❄️ Sara Beery
👋 Jon Van Oast
🙌 Mari Reeves
Ben Weinstein (benweinstein2010@gmail.com)
2020-12-17 18:32:54

*Thread Reply:* Hey Katie, I assume you are working Laura Prugh at UW? If not that project is really similar!

👋 Katie Breen
Katie Breen (cbreen@uw.edu)
2020-12-18 15:55:36

*Thread Reply:* Hey Ben, Yes that's the one! I am Laura Prugh's student. Currently the model is a (pretty simple) custom network, but I am looking for ideas to improve the accuracy. Would love to connect after the holidays if you have time!

Kim Goetz - NOAA Federal (kim.goetz@noaa.gov)
2020-12-18 03:09:30

Hi All, Thanks to @Dan Morris for the invite! I am a marine mammal spatial ecologist at NOAA and I am currently working on a project to detect whales in satellite imagery on an operational scale. We are starting with two endangered whale species but are getting a little hung up on some of the details. Our plan is to create an annotated image library that can feed into ML processes. We realize this is a bit of a 'choose your own adventure' in terms of who you use for the various aspects of this project (cloud storage, image tiling, annotating, ML), so we are hoping to use this forum as a way to solicit advice from the bigger group. For example, to create an annotated dataset, you can use a myriad of programs (e.g. VIAME, Picterra, VGG), but some of these have issues dealing with satellite imagery. In addition, ML usually requires bounding boxes as opposed to dots, so how big do you make a box around the object? Additionally, we need to export the geographic coordinates of each bounding box, not pixel coordinates. If anyone has any thoughts or advice please share =).
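A hand-rolled sketch of the two steps Kim mentions: build a fixed-size box around an annotated dot, then convert its pixel corners to geographic coordinates via the image's affine geotransform (the six numbers stored with a GeoTIFF or world file). No GIS library is assumed; the geotransform values and box size below are purely illustrative.

```python
# Illustrative sketch: dot -> fixed-size pixel box -> geographic coordinates.

def box_around_point(col, row, half_size):
    """Fixed-size pixel box centered on the annotation dot; half_size might be
    chosen as the expected animal length divided by the ground sample distance."""
    return (col - half_size, row - half_size, col + half_size, row + half_size)

def pixel_to_geo(col, row, gt):
    """GDAL-style geotransform:
    gt = (x_origin, pixel_width, row_rotation, y_origin, col_rotation, pixel_height)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# 0.5 m pixels, north-up image (illustrative values, e.g. a UTM grid):
gt = (500000.0, 0.5, 0.0, 4649776.0, 0.0, -0.5)
cmin, rmin, cmax, rmax = box_around_point(1200, 800, half_size=20)
print(pixel_to_geo(cmin, rmin, gt), pixel_to_geo(cmax, rmax, gt))
```

In practice libraries like rasterio or GDAL expose this same transform directly, but the arithmetic is just the affine mapping above, so any annotation tool that exports pixel boxes can be post-processed into geographic boxes.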

🐋 Sara Beery
🙌 Mari Reeves
Nathan Hahn (nhahn@colostate.edu)
2020-12-18 13:29:32

Hello! I am an ecology PhD student at Colorado State University, and through my own research using a variety of camera trapping and GPS tracking tools have been pulled into the conservation tech world. Currently, I am working on a project to look at the interface between conservation practitioners and engineers, in terms of both feature importance and the collaborative process of developing and implementing technologies in practice. I have developed a short questionnaire around these topics, which should take ~6 minutes to fill out. We have received a lot of responses from practitioners, but are looking to get more thoughts from those on the tech side. Here is a link to the survey: http://colostate.az1.qualtrics.com/jfe/form/SV_7WBiscDDocCYIbb. If any of you have additional thoughts please share!

👋 Oisin Mac Aodha, Sara Beery
✅ Carly Batist
Ed Miller (ed@hypraptive.com)
2020-12-18 17:46:56

*Thread Reply:* Hi @Nathan Hahn, I responded to the survey. As a technologist with no formal conservation background, I found many of the questions were not applicable for me. Many were more suited to the conservation scientist I collaborate with.

Howard L Frederick (simbamangu@gmail.com)
2020-12-19 06:06:36

*Thread Reply:* Hi @Nathan Hahn, great topic - the ecologist/engineer interface is a really important and often blocking link. Keen to hear more.

Armin Bazarjani (bazarjan@usc.edu)
2020-12-21 17:23:52

Hey all I’m new to the group so I apologize if this isn’t the correct channel for something like this. I’m currently finishing up my Masters in Electrical Engineering at USC and I’m very interested in computer vision. I would love to be able to apply that interest to my budding passion of sustainability and conservation.

Because I am thinking of applying to PhD programs next year I would love to get more research experience in this area, so I can then apply my own research to it in the future! So, if any of you are looking for volunteers or know of people who could use an extra hand, please let me know!

🎉 Sara Beery, Tony Chang
Heather Lynch (heather.lynch@stonybrook.edu)
2020-12-22 16:02:39

*Thread Reply:* Dimitris Samaras and I collaborate on a number of problems in the area of computer vision and conservation, might be worth considering doing a PhD with him (https://www3.cs.stonybrook.edu/~samaras/). His lab does a lot of interesting things applying computer vision to both health and environmental applications.

👍 Armin Bazarjani
Armin Bazarjani (bazarjan@usc.edu)
2020-12-22 16:58:00

*Thread Reply:* Thanks for the suggestion @Heather Lynch, I'll check his lab out!

Tony Chang (tony@csp-inc.org)
2020-12-25 16:12:52

*Thread Reply:* @Armin Bazarjani we are working on a suite of projects and would love to chat if you are looking for projects to gain some experience in sustainability and conservation.

Sarra Alqahtani (sarra-alqahtani@utulsa.edu)
2020-12-23 12:24:47

Hi all! I'm new to this group and I would like to introduce myself. I'm Sarra Alqahtani, a computer science assistant professor at Wake Forest University. I study land change in Amazonian rainforests using computer vision and deep learning with imagery taken by UAVs and from different satellites. I recently got a huge grant from NASA and I'm looking for a postdoc to work with me, 3 other professors, and 3 undergraduates. If you are interested, please talk to me directly. Thanks!

🎉 Sara Beery, Tony Chang, Amrita Gupta
👍 Carly Batist
Armin Bazarjani (bazarjan@usc.edu)
2020-12-23 19:40:15

*Thread Reply:* That’s awesome, excited to see what you do!

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-25 16:04:25

*Thread Reply:* @Sarra Alqahtani in January, let's talk about your needs and how we can be helpful. My lab works on tree detection/species classification, mostly in temperate zones, but we have a paper in review from French Guiana. I was just talking to a colleague in the Ecuadorian Amazon about potential applications there. See recent pubs for links and Python packages: https://scholar.google.com/citations?hl=en&user=7POnELAAAAAJ&view_op=list_works&sortby=pubdate, https://github.com/weecology/DeepTreeAttention, http://visualize.idtrees.org/

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-25 16:07:18

*Thread Reply:* also @John Brandt and @Tony Chang for their experience on multi-temporal satellite fusion for land cover + deforestation analysis. https://www.mdpi.com/2072-4292/11/7/768

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-25 16:09:46

*Thread Reply:* We also are just starting (I can't get planet to return my emails!) work on UAV satellite fusion for wading bird colonies. Would love your input here, as the project is just forming.

Tony Chang (tony@csp-inc.org)
2020-12-25 16:11:02

*Thread Reply:* Thanks for the intro @Ben Weinstein! Congrats Sarra on the NASA grant! Definitely interested in helping, I’ve been working lately on a US scale forest structure project, but could scale globally pending more data. We are focused on using Harmonized Sentinel Landsat data and fusing to other datasets. Haven’t looked into UAV data much, but would be interested to learn more.

Sarra Alqahtani (sarra-alqahtani@utulsa.edu)
2020-12-26 12:25:51

*Thread Reply:* @Ben Weinstein @Tony Chang Thank you so much for the information! Would love to meet with you both to discuss the overlaps between our research.

Ben Weinstein (benweinstein2010@gmail.com)
2020-12-26 12:59:27

*Thread Reply:* great, I'll organize something for January, keeping @John Brandt and anyone else in mind as well. We did a recent open meetup on drones for wildlife monitoring and I think it was useful. Let's post agenda items of interest here. Immediate thoughts looking at that postdoc ad: 1) introductions, goals, and background; 2) integrating multi-sensor imagery for deep learning classification; 3) training multi-scale (resolution) classification workflows; 4) available datasets for benchmarks and annotations. Feel free to add.

Sarra Alqahtani (sarra-alqahtani@utulsa.edu)
2020-12-26 14:06:02

*Thread Reply:* I would include land change/deforestation detection using time series data as an unsupervised way of classification.

👍 Ben Weinstein, Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-01-06 12:04:16

*Thread Reply:* Okay everyone, let's try to set up a meeting in the next couple days. Anyone is welcome to come discuss the topics listed above, I think @John Brandt @Tony Chang @Sarra Alqahtani are interested. https://whenisgood.net/4k4kphn

Tony Chang (tony@csp-inc.org)
2021-01-06 12:06:50

*Thread Reply:* Thanks for setting this up @Ben Weinstein!

John Brandt (John.Brandt@wri.org)
2021-01-06 13:00:06

*Thread Reply:* Hey! Would love to join this. We are working on a massive tree planting project with Mastercard (https://www.mastercard.us/en-us/vision/corp-responsibility/priceless-planet.html) and are starting to do UAV classification work as well as change detection with medium res imagery

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-07 10:10:55

*Thread Reply:* bump here to fill out times @John Brandt and others

Sarra Alqahtani (sarra-alqahtani@utulsa.edu)
2021-01-07 12:20:56

*Thread Reply:* Sorry for the late action. Just sent my available times

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-07 12:32:40

*Thread Reply:* just confirming the time zone worked for you; Tony and I are both west coast. It should have auto-shown you the correct timezone; I set it to PT. Did you mean 6pm ET?

👍 Ben Weinstein
Sarra Alqahtani (sarra-alqahtani@utulsa.edu)
2021-01-07 13:04:40

*Thread Reply:* Yes ET

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-07 13:05:00

*Thread Reply:* okay, just @John Brandt

Sara Beery (sbeery@caltech.edu)
2021-01-07 15:38:21

*Thread Reply:* I added times in as well, I won't have much to add but I'd love to learn from you all :)

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-07 17:46:45

*Thread Reply:* okay, @Sara Beery, @Tony Chang, @Sarra Alqahtani: the 4 of us Friday 1:30pm PT (tomorrow). I will email John; perhaps he isn't on Slack as much as we are. I think his feedback will be extremely useful, since he's basically devoted the last few years to this, plus I'm anxious to hear about the status of the WRI/Norway/Planet tropical data program.

John Brandt (John.Brandt@wri.org)
2021-01-07 17:48:46

*Thread Reply:* Hey @Ben Weinstein, sorry I'm on vacation right now so I haven't been on Slack much -- I'll be back in the office next week though and I'll catch up with you

🌴 Ben Weinstein, Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-01-07 17:49:26

*Thread Reply:* no problem, when you get back drop me a note. I'll see if florence is around.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-07 17:57:42

*Thread Reply:* okay, looks like she left WRI.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-08 16:27:56

*Thread Reply:* Benjamin Weinstein is inviting you to a scheduled Zoom meeting.

Topic: Benjamin Weinstein's Personal Meeting Room

Join Zoom Meeting https://ufl.zoom.us/j/4610898573


Tony Chang (tony@csp-inc.org)
2021-01-08 17:19:58

*Thread Reply:* Thanks for this meeting! It was great meeting you all!

🙌 Sara Beery
Tony Chang (tony@csp-inc.org)
2021-01-08 17:22:41

*Thread Reply:* @Sarra Alqahtani I’m having a conversation with Caleb Robinson, who works on land-use change questions in the US with a combination of MODIS and NAIP data. Most of what he does is applying epitomic representations for low-resolution labels to work in conjunction with high-resolution images. Your problem is kind of the reverse case (high-resolution labels with low-resolution images), but I’ll see if he has any novel solutions. I agree, GANs may not be the best way…but maybe?

Sarra Alqahtani (sarra-alqahtani@utulsa.edu)
2021-01-08 17:33:52

*Thread Reply:* Thanks everyone, it was a very informative meeting. @Tony Chang We are still adjusting our GAN model using downsampled drone images and hopefully we'll get better results.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-14 12:08:10

*Thread Reply:* @Sarra Alqahtani see the small blurb in there about gold mining https://maaproject.org/2021/norway-agreement/ this was the dataset I was mentioning, connected to @John Brandt’s work.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-05 11:43:52

Cross-posting: Hi everyone,

We hope everyone had a restful holiday and we're looking forward to seeing you for our next discussion of machine learning for remote sensing applications! We'll kick back off this Friday January 8th at 11am PST / 2pm EST / 8pm CET. Jose Luis Holgado Alvarez (Technical University of Berlin) will present on Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images.

Bring your questions! As always, full details and updated schedule here. Zoom link: https://umd.zoom.us/my/hkerner

🙌 Sara Beery, gvanhorn
👍 Vienna Saccomanno, Casey Youngflesh
Ben Weinstein (benweinstein2010@gmail.com)
2021-01-05 11:44:20

*Thread Reply:* @Sarra Alqahtani, we have a remote sensing machine learning zoom group for biology.

Petar Gyurov (pgyurov93@gmail.com)
2021-01-05 14:19:10

Hi all. For the past couple of months I have been working on a project involving MegaDetector that I am excited to share with you.

I have built a GUI around MegaDetector that speeds up the process of weeding out empty camera trap images. The application detects animals in your photos, then lets you review the output of the model and make corrections. At the end your original images are sorted in folders. It's designed to be relatively simple to use.

I have just published version v0.0.1-alpha which you can download from https://github.com/petargyurov/megadetector-gui/releases If you don't have time to download it and try it, here is a short demo video: 📽️ https://streamable.com/h5pcdu

My next goal is to implement more models and the ability to train a custom model, as well as few other things. Let me know what you think!

🙌 Omiros Pantazis, Sara Beery, Ben Weinstein, Olivier Gimenez, David Will
Petar Gyurov (pgyurov93@gmail.com)
2021-01-05 14:20:33

*Thread Reply:* Also, for this project, I did some refactoring of the base MegaDetector code, mostly just to improve usability. If you want to use MegaDetector as a CLI or as part of your scripts, you may want to look at my other repo: https://github.com/petargyurov/megadetector-api

Sara Beery (sbeery@caltech.edu)
2021-01-05 14:22:14

*Thread Reply:* This is amazing!!!

❤️ Petar Gyurov
Sara Beery (sbeery@caltech.edu)
2021-01-05 14:30:12

*Thread Reply:* Have you thought about adding the ability for the reviewer to box animals that weren't found initially? Might be useful for finetuning project-specific models down the line.

Petar Gyurov (pgyurov93@gmail.com)
2021-01-05 14:33:57

*Thread Reply:* @Sara Beery Yes, that is something I want to add but it involves quite a bit of work. It's on the todo list 🙂

👍 Sara Beery
Caleb Robinson (calebrob6@gmail.com)
2021-01-05 17:19:56

*Thread Reply:* @Siyu Yang

❤️ Sara Beery
Hannah Yin (hannah.yin@rice.edu)
2021-01-05 21:57:02

Hi everyone, and thank you @Sara Beery for the invite! I'm Hannah, a senior undergrad student double-majoring in Biology and Computer Science at Tufts University. Super grateful and excited to find an active community here bringing much needed technical guidance to ecology, conservation, and climate change response strategies. Looking forward to exploring how I can put my cs training to good use in these areas and learning from you all. I am fairly new to AI and just completed an introductory course in machine learning last semester. Much of my experience outside the classroom so far comes from working on research projects in wild vertebrate stress physiology and eco-evolutionary processes using empiricism (lab work and data) and theory (equations and biological concepts). That said, I do have some experience in the tech industry, albeit not directly related to bio or AI. Happy to chat and help when/where I can. Looking through the Wild Me documentation is already adding to my understanding of software design out in the real world. 😃

🎉 Sara Beery, Lily Xu
:bearid: Ed Miller
Sarra Alqahtani (sarra-alqahtani@utulsa.edu)
2021-01-07 12:23:21

@Hannah Yin Happy to have you here. I would love to have you in my research group if you are willing to work remotely with my team. I have a group of wonderful undergraduates who taught themselves ML and RL and I think you will enjoy working with them. Please email me if you are interested: alqahtas@wfu.edu

Sara Beery (sbeery@caltech.edu)
2021-01-11 13:32:08

From the Computational Sustainability network:

CLAIRE Network is collaborating with AIhub.org on a focus series on “AI for Good: UN sustainable development goals”. This series highlights AI research relating to the UN sustainable development goals (SDGs). Each month we focus on a different UN SDG. The first SDG topic, released in January, was SDG: Good health and well-being. You can find the articles published so far collected here: https://aihub.org/tag/focus-on-good-health-and-well-being/. The next topic will be SDG: Climate action, and they are looking for researchers who work in AI relating to climate action who are interested in writing or talking about their work. You could write (or describe on video) a short blog post about your research, write an opinion piece about an area of current interest, give a tutorial, or take part in an interview. If you are interested, please do get in touch at aihuborg@gmail.com. The deadline for contributions is 12 February 2021.

👍 Armin Bazarjani, Mikey Tabak
Ben Weinstein (benweinstein2010@gmail.com)
2021-01-12 11:50:45

I've held 4 or 5 meetings derived from this channel over the last two weeks. I want to thank everyone for being involved and growing momentum. One note might be that we as a community (understandably) focus alot on the challenges and obstacles to bringing AI to biological monitoring. I created a thread in #random for people to drop in images of cool successes they have had over the last year. I hope 2021 is a great year for us.

💚 Lukas Liebel, Sara Beery, Caleb Robinson, Jon Van Oast, Talia Speaker
👍 Armin Bazarjani, Caleb Robinson, Benjamin Kellenberger, Sarra Alqahtani, Mikey Tabak
🙌 Nathaniel Rindlaub
Sara Beery (sbeery@caltech.edu)
2021-01-12 19:36:15

New hub for climate datasets, with an AI focus: http://mldata.pangeo.io/

😎 Jon Van Oast, Jason Holmberg (Wild Me)
☀️ Ed Miller, Jason Holmberg (Wild Me)
🌎 Armin Bazarjani, Oisin Mac Aodha, Omiros Pantazis, Carly Batist
👍 Benjamin Kellenberger
Ben Weinstein (benweinstein2010@gmail.com)
2021-01-14 21:34:36

Does anyone in the channel use more than the standard DJI quadcopters for large-scale surveys? @John Payne? Looking to spend a lot more money on a more professional drone for large-scale mapping for our Everglades bird colony project. Wingtra drones?

Howard L Frederick (simbamangu@gmail.com)
2021-01-14 22:25:32

We have used microlights with wing mounted cameras! 3h+ endurance, triple the speed of quadcopter. Payload over 10kg. And no regulatory issues around use of UAV outside line of sight.

😎 Jon Van Oast, Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-01-14 22:33:33

*Thread Reply:* thanks. I'd love to stop paying pilots, but yes, a good idea.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-14 22:36:31

*Thread Reply:* I didn't mention, we also fly a Cessna fixed-wing over the Everglades.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-14 22:36:36

*Thread Reply:* it's old, and a bit scary.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-14 22:38:47

*Thread Reply:* The thing going in our favor is that we are looking at wading bird colonies, whose general locations we tend to know. We think (hope) that 90% of the nesting birds are in these known areas, so they're less spread out than in many other wildlife monitoring programs.

Howard L Frederick (simbamangu@gmail.com)
2021-01-15 00:36:49

*Thread Reply:* @Ben Weinstein what sort of area do you need to cover? What is the mission profile like?

John Payne (drjohnpayne@gmail.com)
2021-01-15 01:29:15

*Thread Reply:* I don’t know Wingtra, but I think you’re on the right general track; fixed-wing drones are really the only option for covering distance. It’s been a while since I looked, but depending on how much you can spend you might look at a UAVFactory Penguin B or C, (https://uavfactory.com/en/penguin-c-uas) which blew away endurance records, or the UAS drones (http://www.ua-sp.com/c-astral) which have much less endurance but are aimed at people needing very precise positioning. If you can afford to spend as much as a modest house would cost, there are even better options :)

Howard L Frederick (simbamangu@gmail.com)
2021-01-15 01:30:37

*Thread Reply:* @John Payne what sort of endurance / speed / payload do those have? #too-lazy-to-read-link

John Payne (drjohnpayne@gmail.com)
2021-01-15 01:42:30

*Thread Reply:* For the Penguin B: 10 kg payload, 25-50 hours(!) endurance, cruise speed of 43 knots, wide variety of payloads available including gyro-stabilized. The C-Astral drones have only about a 3-hour endurance, but lots of available cameras including serious multispectral stuff and LIDAR.

Howard L Frederick (simbamangu@gmail.com)
2021-01-15 23:10:56

*Thread Reply:* I’ve asked UAVFactory for the cost of a basic unit. Those sorts of speed/endurances mean we could cover 800km of flight lines (transects) in a day, which is meaningful. Would still be nearly impossible to get clearances for a large drone here in Tanzania but I’m keen to see what the possibilities are.

Howard L Frederick (simbamangu@gmail.com)
2021-01-18 08:37:39

*Thread Reply:* … aaand UAVFactory is so far quite elusive about what their systems actually cost …

John Payne (drjohnpayne@gmail.com)
2021-01-18 15:03:44

*Thread Reply:* From what I saw it’s around $15K to $20K for a basic system, but obviously depends on the extremely varied set of options chosen; esp. camera systems — I’ll be interested to hear what you find out.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-18 15:26:01

*Thread Reply:* one thing we are really worried about is wind sensitivity for the fixed wings.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-18 15:26:18

*Thread Reply:* My tech is convinced that wingtra drones will be too shaky in our typical 15mph wind.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-18 15:26:26

*Thread Reply:* Everglades is basically windy 100 days a year

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-18 15:26:59

*Thread Reply:* so I contacted https://www.atmosuav.com/ and they advertise a heavier drone for windier conditions.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-18 15:27:06

*Thread Reply:* but it's more like $25,000

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-18 15:27:17

*Thread Reply:* which is probably just outside our price range.

John Payne (drjohnpayne@gmail.com)
2021-01-18 15:46:27

*Thread Reply:* Yes I think there are two issues: 1) tipping side-to-side makes the camera take photos that aren’t pointed straight down, which could violate your transect assumptions and/or add distortion at the edges of the images, and 2) shaking can cause blurring. Gyroscopes can help with blurring (and tipping to some degree), and gimbals can help with the tipping issue. But both also increase payload. I would think that the lighter weight the camera, the lighter the gyro/gimbal needed to control it, so it might be worth looking at integrated camera/gyro/gimbal systems that have been designed with light weight in mind. UAVFactory makes some (https://uavfactory.com/en/stabilized-payloads), but i haven’t checked them out and I don’t know what the prices are. (Howard, I notice they are ITAR-free). In general, I would think that a UAV with less lift would be less tippy, so a bullet-like UAV might be more stable than a glider-like UAV. I don’t know whether swept-wing designs like the C-Astral drone I pointed to or Marlyn are more stable than straight-wing designs in general — maybe Howard knows?

Howard L Frederick (simbamangu@gmail.com)
2021-01-18 22:15:38

*Thread Reply:* UAVFactory’s responses (I’ve had 3 sets of back-and-forth so far with no straight answers) were that my (fantasy) budget of $100K that I put down on the contact form was “far too little” for the base model.

Howard L Frederick (simbamangu@gmail.com)
2021-01-14 22:27:27

.... US regulatory environment much better I think, but for coverage of large areas definitely you should look into economics of microlight use.

Vienna Saccomanno (v.r.saccomanno@tnc.org)
2021-01-19 14:56:22

@Ben Weinstein my team with The Nature Conservancy uses a Wingtra to survey seabirds on Palmyra Atoll.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-19 15:39:09

*Thread Reply:* Is it sensitive to the wind? Can you send any sample images?

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-19 15:39:21

*Thread Reply:* my techs are convinced it won't work in the wind.

Vienna Saccomanno (v.r.saccomanno@tnc.org)
2021-01-19 16:11:32

*Thread Reply:* our tech says that, while Palmyra is a very windy place, the Wingtra handles steady winds up to 35MPH. Imagery is good, but suggest shifting to max shutter speed. Landing in high wind is risky. Very gusty winds will require a larger area for takeoff & landing. In general, his impression is that the Wingtra is much more capable in windy situations than a quadcopter.

I can look into a file share if that is still of interest.

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-19 16:11:48

*Thread Reply:* please!

Ben Weinstein (benweinstein2010@gmail.com)
2021-01-19 16:35:57

*Thread Reply:* If your tech is willing to talk by zoom/phone, I would love to hear.

Vienna Saccomanno (v.r.saccomanno@tnc.org)
2021-01-19 16:47:33

*Thread Reply:* I'll move this to a DM

Sara Beery (sbeery@caltech.edu)
2021-01-21 17:59:56

New journal on "Environmental Data Science"

"OPEN FOR SUBMISSIONS: Environmental Data Science (cambridge.org/eds) is a new, peer-reviewed open access journal dedicated to the potential of artificial intelligence and data science to enhance our understanding of the environment, and to address climate change. Led by Claire Monteleoni (University of Colorado Boulder) and a team of Editors, and published by Cambridge University Press, the journal is now open for submissions."

🎉 Lily Xu, Olivier Gimenez, Riccardo de Lutio, Ed Miller, Chris Yeh
Dan Morris (agentmorris@gmail.com)
2021-01-21 19:19:55

New dataset on lila.science, courtesy of the Wild Nature Institute (and lots of helpful volunteers at Zooniverse):

http://lila.science/datasets/wni-giraffes

This dataset contains keypoints for giraffe photogrammetry, so automating this is a potentially interesting ML problem that steps slightly outside the usual detection/classification paradigms, and WNI already makes extensive use of ML tools to accelerate their workflows, so if a student is looking for a nice self-contained ML project that could be used right away by a conservation organization, definitely take a look at this dataset!

Background on WNI's previous work integrating an AI pipeline for giraffe cropping and individual ID:

https://www.sciencedirect.com/science/article/abs/pii/S1574954118300426?via%3Dihub

:giraffe_face: Stefan Schneider, Sara Beery, Oisin Mac Aodha, Carly Batist, Ming Zhong, Mikey Tabak, Siyu Yang
👀 Alex Zhuang
Sara Beery (sbeery@caltech.edu)
2021-01-27 16:00:30

This year's FGVC Workshop just released its call for papers!! 4-page short submissions, deadline is April 2nd.

"The purpose of this workshop is to bring together researchers to explore visual recognition across the continuum between basic level categorization (object recognition) and identification of individuals within a category population. Topics of interest include:

Fine-grained categorization
• Novel datasets and data collection strategies for fine-grained categorization
• Appropriate error metrics for fine-grained categorization
• Low/few shot learning
• Self-supervised learning
• Semi-supervised learning
• Transfer-learning from known to novel subcategories
• Attribute and part based approaches
• Taxonomic predictions
• Addressing long-tailed distributions
Human-in-the-loop
• Fine-grained categorization with humans in the loop
• Embedding human experts’ knowledge into computational models
• Machine teaching
• Interpretable fine-grained models
Multi-modal learning
• Using audio and video data
• Using geographical priors
• Learning shape
Fine-grained applications
• Product recognition
• Animal biometrics and camera traps
• Museum collections
• Agricultural
• Medical
• Fashion"

https://sites.google.com/corp/view/fgvc8

🐦 Oisin Mac Aodha, Riccardo de Lutio
👍 Oisin Mac Aodha, Armin Bazarjani, gvanhorn, Mikey Tabak
👍:skin_tone_4: Ixchel Meza
Ana Usenko (usenkoa@wwu.edu)
2021-02-01 19:23:34

Hi everyone! I'm new to this group but very excited to be here and wanted to introduce myself! I'm a recent graduate from Western Washington University with BS & BA degrees in computer science and linguistics. I'm not particularly new to AI and have research experience through the university and the Pacific Northwest National Lab in deep learning and natural language processing. The work I've been involved in includes few-shot learning, audio classification, and authorship identification. Although I've not been directly involved in conservation-focused AI pursuits, I am really inspired by the work @Gracie Ermi and others have done at Vulcan for wildlife conservation. Over the past year or so, I've become more interested in the topics of AI and ethics and although conservation isn't directly related to that, I'm always interested in seeing how others are using their skills in AI to improve the world around them and solve important, real-world, ethical problems. Looking forward to getting a peek into what you all are working on here! Additionally, I'm currently searching for a full-time position as a recent grad and trying to break into the field of AI. If anyone knows of any openings in ML/AI/NLP, Data Science, or even software development, please reach out to me! Would absolutely appreciate it, and hope to have the chance to contribute to this group more in the future.

👋 Lily Xu, Howard L Frederick, Ed Miller
Lily Xu (lily_xu@g.harvard.edu)
2021-02-02 13:18:51

Very excited that @Heather Lynch will be giving a talk at Harvard CRCS on Monday on penguins, satellite imagery, and AI!
"How many penguins are there? (And other mysteries solved by satellites and AI)"
Monday, February 8, 2021, 11 AM EST (UTC-5)
Sign up here: https://crcs.seas.harvard.edu/ai-social-impact

> Satellite imagery and computer vision are two transformational technologies that have rapidly, and quite radically, expanded our capacity to study wildlife in the world’s most remote places. In this talk, I will describe my lab’s efforts to combine satellite imagery, drones, and other remote sensing technologies with good old fashioned field work to study the distribution and abundance of penguins and other wildlife in Antarctica. I’ll also discuss the threats facing Antarctic penguins and how scientists are bringing together new technology, artificial intelligence, and advanced predictive modelling to help guide policymakers in their work to protect one of the world’s last remaining wildernesses.

👍 Benjamin Kellenberger, Sara Beery, Talia Speaker, Armin Bazarjani, Casey Youngflesh, Ben Weinstein, Carly Batist, Wethington Michael, Frederic, Ana Usenko, Mikey Tabak
❤️ Sara Beery
Silvia Zuffi (silvia@mi.imati.cnr.it)
2021-02-04 02:12:14

https://www.cv4animals.com/

👍 gvanhorn, Sara Beery, Hemal Naik
Declan (declan.pizzino@consbio.org)
2021-02-05 16:37:14

Hi all, my name is Declan Pizzino and I'm a geospatial analyst with the Conservation Biology Institute. Stoked to share and see how other folks are applying AI to the field of conservation. Many thanks to @Björn Lütjens for the invite!

👋 Sara Beery, gvanhorn, Carly Batist, Lily Xu, Alex Borowicz, Petar Gyurov, Björn Lütjens, Océane, Mikey Tabak
👏 Sarra Alqahtani
Gyri Reiersen (gyri.reiersen@tum.de)
2021-02-06 10:10:38

Hey guys! 👋 Gyri here! Originally Norwegian, having grown up on a farm between mountains and fjords, and currently a master's student in AI in Munich, Germany. Really excited to be writing my thesis with @Björn Lütjens and @David on the topic of forest monitoring!

🌳🌲🌴We want to put together an open source “awesome-forest-datasets” github repo. • Do you know if such a collection already exists? • What forest datasets/collections do you know of? (incl. satellite, drone, airborne, field data, etc)

🌍 Björn Lütjens, Ben Weinstein, Sara Beery, Lily Xu, David, Petar Gyurov, Mikey Tabak, Lukas Liebel
Ben Weinstein (benweinstein2010@gmail.com)
2021-02-06 10:49:24

*Thread Reply:* I can help here too. I know of most of the datasets, ping me if I don't update here during the week. https://github.com/weecology/NeonTreeEvaluation https://www.newfor.net/download-newfor-single-tree-detection-benchmark-dataset/

🙌 Gyri Reiersen, Sara Beery, David, Björn Lütjens
Ben Weinstein (benweinstein2010@gmail.com)
2021-02-06 11:00:46

*Thread Reply:* https://www.planet.com/nicfi/

🙌 Gyri Reiersen, Sara Beery, David, Björn Lütjens
🌴 Daniel Grzenda, David
Sara Beery (sbeery@caltech.edu)
2021-02-06 11:04:39

*Thread Reply:* I'm currently curating a bunch of urban forest data, happy to add once it's ok'd for release!

🦅 Björn Lütjens, David
Daniel Grzenda (grzenda@uchicago.edu)
2021-02-06 14:30:06

*Thread Reply:* I'm currently working on a project to use the NICFI dataset to monitor land use in the tropics. We're hoping to use transfer learning and looking for a model trained on 5 layer surface reflectance images (best resolution in the NICFI dataset). I'd be happy to share what we've found so far/bounce ideas off each other if you end up doing something similar.

Ben Weinstein (benweinstein2010@gmail.com)
2021-02-06 16:26:40

*Thread Reply:* @Daniel Grzenda I'd love to hear more, what measure are you trying to predict? I have access to the data, but don't see a massive use case yet.

Daniel Grzenda (grzenda@uchicago.edu)
2021-02-08 10:17:09

*Thread Reply:* We're working on labeling a subset of the quads with segmentation masks of different industries (mines, mills, plantations, etc) and the plan is to use Dice as our metric
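For reference, the Dice coefficient mentioned here is cheap to compute on binary masks; a minimal NumPy sketch (function name and toy masks are illustrative, not taken from any project in this thread):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: predicted mask covers 2 pixels, ground truth covers 1,
# and they overlap on 1 pixel -> Dice = 2*1 / (2+1) ≈ 0.667
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice_coefficient(a, b))
```

The `eps` term keeps the score defined when both masks are empty.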

Mikey Tabak (tabakma@gmail.com)
2021-02-10 09:16:31

@Sara Beery I have a question about MegaDetector. Why did you choose to use Faster-RCNN instead of a YOLO or SSD model? Did you find that it was more accurate? I couldn't find details on model selection in your paper, but sorry if you've already described this somewhere else. I ask because I'm working on similar problems (except that I'm trying to incorporate more classes) and finding that YOLO and SSD outperform Faster-RCNN on my data, so I'm wondering if there is something I'm missing (which is usually the case). Thank you!

Hemal Naik (hnaik@ab.mpg.de)
2021-02-10 10:57:03

*Thread Reply:* Great question, I am also interested. I recently found YOLOv3 worked quite well in comparison with Faster R-CNN-type architectures.

👍 Mikey Tabak, Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-02-10 12:28:50

*Thread Reply:* The reason was that I wanted to work with the object-centric features provided by the two-stage propose-then-classify structure of Faster R-CNN. I think there's definitely a lot of interest in a lighter-weight version though! Adapting it in an object-centric way would mean thinking about how to appropriately extract object-specific features to compare to, and where in the network you would want to place the attention block. I think there's even possibly a classification version of this where you add attention across entire frames before the final network layers.

👍 Mikey Tabak
Mikey Tabak (tabakma@gmail.com)
2021-02-10 13:47:39

*Thread Reply:* Great. Thanks for explaining this Sara!

Sara Beery (sbeery@caltech.edu)
2021-02-12 14:19:35

*Thread Reply:* Haha omg, I'm sorry, I just realized I answered a question you didn't ask! I just answered "Why did you start from Faster R-CNN for Context R-CNN"

Sara Beery (sbeery@caltech.edu)
2021-02-12 14:24:36

*Thread Reply:* The answer to your actual question about the MegaDetector was that on the data we were evaluating on, at the time we trained the model, Faster R-CNN was the highest performing architecture (and did outperform YOLO). I think possibly because we're training on lots of data from different parts of the world the higher capacity model was able to capture more information. However, there have been a couple big advancements in object detection since then (2018!) and @Dan Morris and I were just talking a couple weeks ago about doing a new architecture comparison for MegaDetector V5, and including some of the newer, high-performing models such as RetinaNet, CenterNet, EfficientDet, etc. as well as re-evaluating on SSD/YOLO.

👍 Petar Gyurov
Olga Khroustaleva (okhroust@gmail.com)
2021-02-15 03:59:21

Hi everyone! New here, wanted to introduce myself. I am Olga, a long-time Googler with background in user experience. In addition to my Google job, I am also a Master's student in the Sustainable Resource Management program at TU Munich, where I plan to focus on wildlife preservation. Thank you @Sara Beery for inviting me to this community!

🌍 Omiros Pantazis, Lily Xu, Elijah Cole (Deactivated), Sara Beery, gvanhorn, David, Yumna, Megan Cromp, Gyri Reiersen, Björn Lütjens
Ben Weinstein (benweinstein2010@gmail.com)
2021-02-15 12:53:14

*Thread Reply:* Hi Olga! We all definitely need more help in front end development. Welcome.

Sara Beery (sbeery@caltech.edu)
2021-02-16 18:19:25

Can anyone point me to work in NLP for conservation applications? Specifically anything on extracting scientific data from historic records/papers?

Ben Weinstein (benweinstein2010@gmail.com)
2021-02-16 19:26:37

*Thread Reply:* They must be doing some NLP here, https://www.zooniverse.org/organizations/md68135/notes-from-nature I know there is transcription.

👍 Sara Beery
Ritwik (rittyun@yahoo.com)
2021-02-17 06:45:59

*Thread Reply:* working on a similar thing now.. paper under review with minor revisions.. will post here soon 🙂

Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2021-02-16 19:27:43

Our use cases are limited for our intelligent agent:

  1. Machine translation from multiple languages to English then
  2. Binary classifier: do the tags+title+description (standardized to English) describe a video of a whale shark in the wild or something else (non-data)
  3. Date prediction with CoreNLP
👍 Sara Beery, Ankita Shukla
Sara Beery (sbeery@caltech.edu)
2021-02-18 19:58:11

Anyone know of any public datasets collected in the wild (not in a lab) that have behavior labels per-animal? Asking for a friend 🙂

Sara Beery (sbeery@caltech.edu)
2021-02-18 20:05:28

*Thread Reply:* Snapshot Serengeti has kinda messy crowdsourced sequence-level behavior labels, but they aren't matched to individuals

Sara Beery (sbeery@caltech.edu)
2021-02-18 20:06:59

*Thread Reply:* I've seen some work recently on tracking and/or estimating pose of individual chimps (https://gdude.de/densepose-evolution/, https://advances.sciencemag.org/content/5/9/eaaw0736)

Sara Beery (sbeery@caltech.edu)
2021-02-18 20:07:45

*Thread Reply:* But I'm not sure if those datasets are 1) public, 2) have behavior labels

Ben Weinstein (benweinstein2010@gmail.com)
2021-02-18 20:14:00

*Thread Reply:* https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0158748

🙌 Sara Beery
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2021-02-23 13:08:58

Hey all. We have struggled at Wild Me with finding small batch data annotation outsourcing options, either through volunteers, paid local contractors, or independent firms. Local contractors and volunteers have been problematic, rarely completing one project or lasting more than two. Given that our average batch size is 2000-2500 photos needing annotations, we're not at the volume or price point that a big vendor (iMerit, Samasource, etc.) is interested in. We recently started working with a past collaborator (Dr. Richard Lamprey) and his Uganda-based aerial survey analytics team (https://www.wildspace-image-analytics.com/). We have completed three projects with them and are wrapping up a fourth. Their speed, quality and attention to detail have been impressive (the team is composed of experienced aerial survey reviewers), and the price point is good too. I highly recommend them if you are looking for an outsourcing option.

Full disclosure: I am a past collaborator with Dr. Richard Lamprey. I have no financial interest in Wildspace.

👍 Vienna Saccomanno, Sara Beery, Jon Van Oast, Carly Batist, Ben Weinstein, Ed Miller, Lily Xu, Armin Bazarjani, David
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-02-24 09:20:34

Hi all! Just wanted to give a heads-up that WILDLABS is running their annual week-long #tech4wildlife photo challenge starting today!! Check it out on Twitter: https://twitter.com/WILDLABSNET/status/1364521244620644352?s=20

❤️ Talia Speaker, Sara Beery, Rob Sinclair, Océane
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-02-24 09:21:51

More info here as well - https://www.wildlabs.net/tech4wildlife

Petar Gyurov (pgyurov93@gmail.com)
2021-02-26 05:44:32

Has anyone here made production-ready Tensorflow apps that utilise the GPU? I want to simplify the process of enabling GPU support for non-tech users; I was thinking I could create a Docker image that has all the CUDA libraries, drivers, etc. installed but even that looks like it might not be straightforward for the end user.

Frederic (frederic@apic.ai)
2021-02-26 07:54:46

*Thread Reply:* I build production-ready TF apps, but not for deployment to consumers/non-tech people. I have control over the devices that TF GPU and CPU containers get deployed to, or can define requirements for their environment.

To my knowledge the GPU driver / CUDA / cuDNN version hassle is still present in nvidia-docker. Especially when using new GPUs like the 3060, compatibility with the prebuilt GPU TF containers is a real pain.

One of the main benefits is that you have way more control over the software stack with nvidia-docker, but the pain of supporting older and newer GPUs is still present.

Furthermore, I am not sure about the EULA implications when distributing images publicly.

Side note: if you want to use GPUs within docker-compose, ping me for workarounds, since it's not officially supported yet ;)

Frederic (frederic@apic.ai)
2021-02-26 07:58:54

*Thread Reply:* If you decide to go with docker images, i could share some of our internal guides privately, that might help you to set everything up and have some orientations for the README.md.

Petar Gyurov (pgyurov93@gmail.com)
2021-02-26 08:13:41

*Thread Reply:* @Frederic Thanks -- I'm glad you've shared this knowledge with me before I started any Docker development. It looks like there may not be much to gain in my use case.

I've thought about just writing a script that downloads and installs all the requirements but NVIDIA don't make it easy (you need an account to download the cuDNN libs 😠 ); and I doubt I can just bundle them into my installer without facing licensing issues. Will ping you if I decide to explore things further, thanks for the offer 👍

Ben Weinstein (benweinstein2010@gmail.com)
2021-02-26 10:00:29

*Thread Reply:* @Petar Gyurov what do you have in mind? I took a meeting with an engineer not so long ago on on-device camera traps, may be relevant here.

Petar Gyurov (pgyurov93@gmail.com)
2021-02-26 10:12:46

*Thread Reply:* @Ben Weinstein Enabling GPU support in my project, MegaDetector-GUI, requires end users to download all the CUDA drivers, toolkits and library patches. I've written a basic How-To guide but I was kind of hoping for a more elegant "automagical" solution that non-tech people can easily do.
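One low-effort option for a GUI like this is to probe the end user's machine for a working NVIDIA driver before attempting GPU inference, and fall back to CPU otherwise. A stdlib-only sketch (the function name and fallback policy are illustrative, not part of MegaDetector-GUI):

```python
import shutil
import subprocess

def nvidia_driver_present(timeout_s=10):
    """Best-effort check for a working NVIDIA driver by running nvidia-smi.

    Returns False (i.e. fall back to CPU inference) if the binary is
    missing, hangs, or errors out. Note this says nothing about whether
    the installed CUDA/cuDNN versions match what TensorFlow expects.
    """
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        subprocess.run(["nvidia-smi"], capture_output=True,
                       check=True, timeout=timeout_s)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
        return False

print("GPU driver detected:", nvidia_driver_present())
```

A check like this at least lets the app show a clear "running on CPU" message instead of crashing on a missing CUDA library.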

Benjamin Hoffman (benjaminsshoffman@gmail.com)
2021-03-01 11:06:19

Hi everyone! I’ve been here for a while but I haven’t introduced myself. I’m a recent math PhD (in differential geometry). Now I’m pivoting to work in the conservation + ai realm. I’ve been collaborating with (and learning from) @gvanhorn at the Cornell lab of ornithology, working on some bird sound id problems. I’m also interested in remote sensing and its applications to conservation problems. Thanks to @Sara Beery for the invite! 🦉

👋 Sara Beery, Ben Weinstein, Oisin Mac Aodha, Declan, Océane, aruna, Omiros Pantazis, Lily Xu, Carly Batist, Daniel Grzenda, Vienna Saccomanno, Ed Miller, Björn Lütjens, Armin Bazarjani, Gyri Reiersen, Lukas Liebel
🐦 gvanhorn, Océane, Riccardo de Lutio, Hannah Yin
👋:skin_tone_4: Ixchel Meza
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-03 10:32:00

*Thread Reply:* What's the latest in bird sound ID? What's new? Are we getting better? What's the main obstacle, annotations? geographic generalization? data quality?

Benjamin Hoffman (benjaminsshoffman@gmail.com)
2021-03-03 11:22:02

*Thread Reply:* My perception (keeping in mind I’ve only been working on this for 6 months) is that there is a lot of room for improvement in getting more annotated data. There exists a ton of high quality recordings (eg at the Macaulay Library), but it’s a bit of an expert task to box and correctly label each bird vocalization.

Benjamin Hoffman (benjaminsshoffman@gmail.com)
2021-03-03 11:22:28

*Thread Reply:* Species recognition in soundscapes remains hard, because there tends to be many overlapping sounds, birds at varying distances from the microphone, and lots of background noise.

Benjamin Hoffman (benjaminsshoffman@gmail.com)
2021-03-03 11:23:02

*Thread Reply:* I think most existing approaches to bird sound recognition use tweaks of off-the-shelf vision models (eg ResNets), and it would be exciting to construct some models which are more custom-built for the domain.

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-03 11:24:41

*Thread Reply:* Interesting, so it's in the localization of the annotations within samples. I've heard this before. Thanks for your thoughts.

Benjamin Hoffman (benjaminsshoffman@gmail.com)
2021-03-03 11:25:50

*Thread Reply:* no problem, i appreciate the interest!

Sara Beery (sbeery@caltech.edu)
2021-03-03 11:26:41

*Thread Reply:* It's always interesting to see how many common threads there are across domains!!

Sara Beery (sbeery@caltech.edu)
2021-03-04 13:00:13

New python API for GBIF data, focused on pulling data for ML training!!! If you're interested in testing or contributing you can contact gbif-dl@inria.fr

"We released a first version of GBIF-DL (GitHub, PyPI), a package that makes it simpler to obtain training images from the GBIF database to be used for training machine learning classification tasks. It wraps the GBIF API and supports directly querying the API to obtain and download a list of URLs efficiently (based on asyncio) and with many options (balancing, bounding, filtering, etc.). We are looking for some potential users who would be willing to test it, provide feedback and/or maybe contribute to it."

https://plantnet.github.io/gbif-dl/
🐍 Stefan Schneider, Ben Weinstein, Declan, Megan Cromp, David
👀 Isaac Griswold-Steiner, Vienna Saccomanno, David
🌴 Riccardo de Lutio
😎 Jon Van Oast
Jon Van Oast (jon@wildme.org)
2021-03-04 15:51:10

*Thread Reply:* this is so cool -- thanks for sharing! being a bit of a gbif fanboy myself, this has me pretty excited.

Ed Miller (ed@hypraptive.com)
2021-03-04 16:06:38

*Thread Reply:* This may come in handy as we start to expand BearID to more species! Is there a way to filter for images that actually show the species in question rather than signs of the species (tracks, scat, etc.)?

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-04 22:16:03

Does anyone in our community have experience mounting cameras to the bottom of manned aircraft for wildlife surveys? My lab at the University of Florida performs manned aircraft surveys of wading bird populations to support everglades restoration by the US Army Corps, as well as endangered species management by Everglades National Park. The project has been going on for a couple decades. We are working to modernize the image processing and analysis tools. We've got a nice object detection model going from unmanned vehicles for small areas and we are working to replace manned observation from our 1957 Cessna 182 with an image capture solution. Ideally we would process the data during flight to throw away the vast majority of images that do not have birds present to keep storage more reasonable (?). We have an FAA approved housing to attach to the struts, so I'm looking for people with experience wiring the camera to a GPU in the plane. We have a Nikon D850 (~45 megapixels). Nadir or oblique angle? Wide angle lens? We fly at about 1000ft and 100 knots, but that was based on a human observation system. We survey a huge area (1000+ sq miles). I'm spoiled by all the innovation in unmanned vehicles that abstracts away a lot of these challenges. All thoughts welcome.

🐦 Lily Xu, Daniel Grzenda, Sara Beery, Björn Lütjens
Howard L Frederick (simbamangu@gmail.com)
2021-03-04 23:04:12

*Thread Reply:* @Ben Weinstein yes - have been doing this for a few years with nadir and oblique images from Cessna 182/206.

Howard L Frederick (simbamangu@gmail.com)
2021-03-04 23:05:13

*Thread Reply:* One of the systems is documented here: https://github.com/TZCRC/Lanner-CamPod

😎 Jon Van Oast, Ben Weinstein, Sara Beery
Howard L Frederick (simbamangu@gmail.com)
2021-03-04 23:09:09

*Thread Reply:* You mention “wiring the camera to a GPU” meaning you’d like to do in-flight processing to discard images? I’d suggest that the cost of storage is very cheap but the cost of missing data by poorly-filtered in-flight processing is high, so think about post-processing. We take images at 2 second intervals - Nikon raw @ 30+MB per image over 4 hours = 200 GB per flight; there are 256 and 512GB cards, but the biggest limitation is speed of the SD card (if it isn’t fast enough you get gaps in coverage while the camera catches up with writes).

👍 Ben Weinstein, Sara Beery
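Howard's figure is easy to sanity-check; a quick back-of-the-envelope in Python, using the numbers from his message (one ~30 MB raw frame every 2 seconds over a 4-hour flight):

```python
# Back-of-the-envelope storage estimate for a 4-hour survey flight,
# shooting one ~30 MB Nikon raw frame every 2 seconds.
interval_s = 2
flight_hours = 4
mb_per_frame = 30

frames = flight_hours * 3600 // interval_s   # frames captured per flight
total_gb = frames * mb_per_frame / 1000      # total storage in GB

print(frames, round(total_gb))  # → 7200 216
```

About 216 GB, consistent with the "200 GB per flight" estimate and with why a 256 GB card was tight for longer flights.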
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-04 23:47:51

*Thread Reply:* Is someone physically changing out the SD cards as they fill during flight? My vision for the inflight processing is just have a pilot flying. That way we have a TB drive in the plane, but I guess we can just skip the detection mid-flight. Especially due to COVID, we can only have one person in the plane.

Howard L Frederick (simbamangu@gmail.com)
2021-03-05 00:30:20

*Thread Reply:* No, it's a single SD card per flight. We were fine with 256GB until now but upgrading to 512 soon for longer flights. You planning on raw files?

Howard L Frederick (simbamangu@gmail.com)
2021-03-05 00:33:43

*Thread Reply:* The reliability of a Nikon to get a photo on the sd card with every triggering is rock-solid. There are probably ways to record to an external drive or computer but don't lose the onboard reliability.

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-05 11:23:55

*Thread Reply:* Great. I'll take this to the team. Thanks.

Pietro Perona (perona@caltech.edu)
2021-03-07 13:25:12

*Thread Reply:* Hi Ben - are you sure that you need to store RAW files? If you stored high quality JPG you may be able to reduce the total storage to just about where your fast SD cards will manage for the whole flight, and then process once you are on the ground.

Pietro Perona (perona@caltech.edu)
2021-03-07 13:26:30

*Thread Reply:* @Ben Weinstein Would you be willing to share your image dataset with the students in our class? As I understand it you need a good algorithm for detecting and counting the birds, where each bird is assigned a latitude and longitude. You probably want to also estimate the species. Correct?

Pietro Perona (perona@caltech.edu)
2021-03-07 13:27:14

*Thread Reply:* @Ben Weinstein If you are inclined to share (part of) your dataset with us I will be in touch to work out the details.

Vincent Miele CNRS (vincent.miele@univ-lyon1.fr)
2021-03-05 08:29:03

Hi all, I am Vincent Miele from CNRS France. We are leading a nice collaboration in France to evaluate different strategies to deal with camera trap images from different places in the country. The idea is to transfer most of our future knowledge to the community and to help people in the field to use machine learning on their own. Thanks to all for being so inspiring. And thanks to @Sara Beery for the invitation.

👋 Daniel Grzenda, Sara Beery, Stefan Schneider, Ben Weinstein, Olivier Gimenez, Ed Miller, Declan
Ixchel Meza (ixchel.meza.ch@gmail.com)
2021-03-05 11:40:17

*Thread Reply:* Hi, Vincent. We are developing a platform for capture, management and administration of biodiversity collections at our group in CONABIO and we are using/testing it internally. Our intention is that any research group could use it, but for now it is still in the development phase 😔 maybe later! 👋:skin_tone_4:

🙌 Olivier Gimenez
Pietro Perona (perona@caltech.edu)
2021-03-07 13:34:32

*Thread Reply:* Bonjour Vincent! Do you have camera trap datasets and image analysis challenges that you are willing to share with our class at Caltech?

Pietro Perona (perona@caltech.edu)
2021-03-07 13:36:53

*Thread Reply:* @Vincent Miele CNRS We (@Sara Beery, @Elijah Cole (Deactivated) and Neehar Kondapaneni) are teaching a computer vision class at Caltech and would love to challenge our students with projects that help ecologists and field biologists.

Vincent Miele CNRS (vincent.miele@univ-lyon1.fr)
2021-03-08 03:52:17

*Thread Reply:* Bonjour Pietro, we are discussing with different french partners to get image data, but these are their own private data... Maybe they will share their data at some point in the mid-term, but this is not planned at the moment. In any case, this would be a good idea!

Elizabeth Madin (emadin@hawaii.edu)
2021-03-05 20:51:35

Hello everyone, I recently heard about this space and have to thank @Heather Lynch for pointing me to it. I'm an Assistant Professor focused on marine conservation ecology at the Hawaii Institute of Marine Biology (part of the University of Hawaii). My lab's work focuses on understanding dynamics of ecosystems, primarily (but not exclusively) through landscapes of risk in marine systems. In particular, we're interested in how humans are changing landscapes of risk, and ecosystems, on local to global scales. Of relevance to this group is our current NSF project looking (in part) at developing an ML algorithm to classify and measure features of coral reefs called "reef halos" (aka "grazing halos"). We also do quite a bit of underwater 'camera trap' work looking at fish and invert behavior. My lab's website is https://www.oceansphere.org. Looking forward to learning more about what others are doing in this space!

👋 Sara Beery, Declan, gvanhorn, Lily Xu, Océane, aruna
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-05 21:20:56

*Thread Reply:* What does underwater camera trapping look like? Can you post a photo? My old tool for video processing had some object detection that seemed to get a lot of downloads from the underwater ecology community. We are redesigning those tools, but don't have a well annotated underwater dataset.

Howard L Frederick (simbamangu@gmail.com)
2021-03-06 08:14:36

*Thread Reply:* @Elizabeth Madin welcome … and do you have any writeups on underwater camera traps? Am trying to figure out how to make some for cichlid monitoring in freshwater and am struggling!

Océane (boulaisoceane@gmail.com)
2021-03-06 14:54:04

*Thread Reply:* thirding @Ben Weinstein and @Howard L Frederick here - I would love to see your underwater camera setup! I’ve been building species ID models with the collected HabCam data along the NE coast of the Gulf of Mexico that look like this, and I’d love to know if folks have come up with resilient cages for their cameras that don’t have bars…

Océane (boulaisoceane@gmail.com)
2021-03-06 14:57:45

*Thread Reply:* These are the preliminary results of species ID detection (trained with mmlab’s toolbox & a resnet50 backbone), and I bet accuracy would consistently increase without these bars…

👍 Sara Beery, aruna
aruna (arunas@mit.edu)
2021-03-07 13:52:56

*Thread Reply:* That looks so cool, @Océane! Thanks for sharing. ♥️

❤️ Océane
Howard L Frederick (simbamangu@gmail.com)
2021-03-08 01:11:50

*Thread Reply:* @Océane that is a very solid looking camera, do you have any public build guides for how you did it?

Elizabeth Madin (emadin@hawaii.edu)
2021-03-08 14:24:24

*Thread Reply:* @Ben Weinstein and @Howard L Frederick: this paper describes in some detail what we did recently with our underwater (coral reef) camera trap arrays. I call them camera traps - it's actually continuous video that we manually annotated (ugh!). We have a fairly large dataset (~3000 annotated individuals, mostly fish + some inverts) from Australia's Great Barrier Reef that could be put towards developing an algorithm if people are looking for a challenge like that! We're collecting and annotating more video now from Hawaii (some species overlap with Australia, but not complete) and would absolutely love to apply an algorithm like what you've posted, @Océane! I'll also post a photo from our earlier rounds (using now-old GoPros) - we don't bother with cages because we've never had a problem with cameras being dislodged, attacked, etc. We're working in fairly wave-limited coral reef lagoon environments.

Elizabeth Madin (emadin@hawaii.edu)
2021-03-08 14:26:49

*Thread Reply:*

🙌 Howard L Frederick, Océane
Elizabeth Madin (emadin@hawaii.edu)
2021-03-08 14:31:47

*Thread Reply:*

❤️ aruna, Howard L Frederick, Océane
Océane (boulaisoceane@gmail.com)
2021-03-08 18:25:09

*Thread Reply:* That little camera is so cute! Is it mounted directly on the coral/rock?

Elizabeth Madin (emadin@hawaii.edu)
2021-03-08 19:29:16

*Thread Reply:* @Océane: It's just temporarily cable-tied to dead coral. Easy on, easy off with no damage to anything. 🙂

❤️ Océane
Howard L Frederick (simbamangu@gmail.com)
2021-03-09 01:17:58

*Thread Reply:* @Elizabeth Madin interesting paper, it’s amazing to be able to quantify feeding behaviour like that. I notice 7 cameras malfunctioned (out of 30?) - what were the problems there?

Elizabeth Madin (emadin@hawaii.edu)
2021-03-09 11:39:15

*Thread Reply:* @Howard L Frederick: Ah, yes...we had significant camera failure because we had to modify the cameras slightly by using a larger waterproof housing case to accommodate the extended-life batteries we needed to get us through dusk and into the night. (We had to set out the cameras and get the boat back to the research station prior to dark.) A number of the modified housings failed overnight, despite having passed our (shorter-term) tests at depth prior to deployment. It was indeed pretty frustrating, but thankfully we still got enough data from the remaining cameras to get to the answer!

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-14 17:39:57

*Thread Reply:* @Océane, @Elizabeth Madin, from ecolog if you didn't see it Hello, my name is Austin Greene and I am a PhD student at the University of Hawaii at Manoa studying the ecology of coral reefs. I recently developed a low-cost camera system (KiloCam) that I hope will make habitat monitoring more accessible, and I am looking for other researchers to help me field test it. KiloCam is small, about 40x25x25mm, and fits inside of cheap GoPro housings to be made waterproof. The instrument is capable of taking 2 MP photos at user-specified intervals, can run off of many power sources, and is highly efficient for long deployment times. For example, on a set of two AA batteries KiloCam should operate for over a week at one photo per minute, or nearly a year with one photo per hour. While the camera isn't particularly high-resolution and is best suited for environments with ample ambient light, it is adequate for habitat monitoring. My hope is that the low cost of building KiloCam (~$30 USD) will make mass deployments possible, or allow improved habitat monitoring in underserved areas.

🙌 Océane
Pietro Perona (perona@caltech.edu)
2021-03-07 00:49:58

Dear all, @Sara Beery, @Elijah Cole (Deactivated) , @neehar Kondapaneni and I will be teaching a computer vision class at Caltech with a focus on quantitative ecology. The students will be carrying out projects which, we hope, will push the envelope of what is possible. Please let us know about your datasets and image analysis needs!

❤️ Sara Beery, Jason Holmberg (Wild Me), Omiros Pantazis, aruna, Carly Batist, Elijah Cole (Deactivated), Subhransu Maji, Océane, David, Suzanne Stathatos, Gyri Reiersen, Armin Bazarjani
aruna (arunas@mit.edu)
2021-03-07 10:48:22

*Thread Reply:* Hello! We have an earth observation dataset at MIT, of satellite images from Sentinel. It's about 100k pairs of images from the Arctic in the winter and the summer. We are currently working on the dataset's diversity, so it could get smaller. But please let us know if you want to use it for the class.

Elijah Cole (Deactivated) (ecole@caltech.edu)
2021-03-07 12:22:34

*Thread Reply:* @aruna Sounds like it could be a good fit! What’s the task / label set like?

aruna (arunas@mit.edu)
2021-03-07 12:23:34

*Thread Reply:* We have winter tiles, (pre-melt), summer tiles (post-melt), and segmentation masks that identify ice and non-ice regions.

Sara Beery (sbeery@caltech.edu)
2021-03-07 13:38:24

*Thread Reply:* @aruna that sounds like a great dataset. Is it public? Or if not, would you be able to share it?

aruna (arunas@mit.edu)
2021-03-07 13:51:35

*Thread Reply:* Yes, I can share it with you. We are currently working on paring it down a little to have more diversity in the tiles, but I can send you the details once it is published.

aruna (arunas@mit.edu)
2021-03-07 13:51:54

*Thread Reply:* Can you give me a date I should aim for to get the dataset to you?

Sara Beery (sbeery@caltech.edu)
2021-03-07 13:52:20

*Thread Reply:* The class will begin at the end of the month!

aruna (arunas@mit.edu)
2021-03-07 13:53:25

*Thread Reply:* Sounds great. Will send it to you folks soon.

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:35:21

*Thread Reply:* Just sharing a few things that might be fun starting with the same data. I can expand on any of these ideas.

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:35:25

*Thread Reply:* 1. Object detection from multiple sensors with uncertainty in spatial alignment. https://github.com/weecology/NeonTreeEvaluation

❤️ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:36:15

*Thread Reply:* 2. Fusing 2D sensor and 3D point clouds in object detection. https://github.com/weecology/NeonTreeEvaluation

❤️ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:38:08

*Thread Reply:* 3. Geographic generalization in fine-grained species classification https://github.com/Weecology/DeepTreeAttention

❤️ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:39:23

*Thread Reply:* 4. 3D CNN versus stacked 2D CNN versus apriori feature engineering for hyperspectral deep learning (369 bands). https://github.com/Weecology/DeepTreeAttention

❤️ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:40:33

*Thread Reply:* 5. Long tailed species classification and unknown class identification. https://github.com/Weecology/DeepTreeAttention

❤️ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:42:54

*Thread Reply:* 6. Miniaturization of object detection networks for in-flight screening of remote sensing imagery during wildlife surveys http://tree.westus.cloudapp.azure.com/everglades/

❤️ Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-03-07 16:45:21

*Thread Reply:* Hey @Ben Weinstein if we get a student team or two interested in these would you be able to help mentor them?

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:46:02

*Thread Reply:* sure, I can't promise a ton of time, but a little

👍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:47:33

*Thread Reply:* Then there is the hummingbird data. Key frame detection and tracking in low-frame rate videos. https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13011

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:49:15

*Thread Reply:* Some of these are more ready for students than the others. #6 and #2 are pretty plug and play.

❤️ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:51:12

*Thread Reply:* also we ran a competition last year that covers the general (easier) tree species classification and bounding box detection. The docs and data are all there.

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:51:14
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-07 16:51:45
Pietro Perona (perona@caltech.edu)
2021-03-07 20:33:47

*Thread Reply:* @Ben Weinstein Ben - thank you very much for all the suggestions!!

Caleb Robinson (calebrob6@gmail.com)
2021-03-08 15:36:43

*Thread Reply:* We are currently running a competition on high-resolution land cover change detection from weak low-resolution labels (https://www.grss-ieee.org/community/technical-committees/2021-ieee-grss-data-fusion-contest-track-msd/). The competition will be over next month (it is currently in the closed test phase), but the evaluation server/data will remain up.

👍 Sara Beery, Pietro Perona
Caleb Robinson (calebrob6@gmail.com)
2021-03-08 15:38:01

*Thread Reply:* There is an interesting super-resolution component from the CV side (and of course identifying land cover change over time has strong ecological implications!)

Pietro Perona (perona@caltech.edu)
2021-03-09 12:35:49

*Thread Reply:* Many thanks to @Caleb Robinson @Ben Weinstein @aruna

Armin Bazarjani (bazarjan@usc.edu)
2021-03-09 18:15:36

*Thread Reply:* It would be great if you could also record and/or post the lectures somewhere! I would love to watch them, and I'm sure others would too.

➕ aruna, Sara Beery, Ștefan Istrate
Sara Beery (sbeery@caltech.edu)
2021-03-09 18:16:36

*Thread Reply:* We're planning on hosting it all publicly 🙂

🙌 Armin Bazarjani, aruna
Subhransu Maji (smaji@cs.umass.edu)
2021-03-10 13:24:49

*Thread Reply:* A bunch of us have been working on detecting tree swallow roosts from RADAR data. We are planning to release a standardized benchmark around this. Would be happy to share it. What’s the timeline for the course? @Dan Sheldon and I could also do some lightweight mentoring if there’s interest in working on this. More info: https://people.cs.umass.edu/~zezhoucheng/roosts/

😍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-03-10 13:34:46

*Thread Reply:* The class will be starting end of this month, so the timeline is a bit tight, but I think that would be an amazing application to bring in and we would be incredibly happy to have you act as high-level mentors. Let us know if that timeline might be possible?

Sara Beery (sbeery@caltech.edu)
2021-03-10 13:37:09

*Thread Reply:* Another option might be to bring in a UG team to help with testing and publishing the benchmark as part of their project? Different members could run different baselines and provide analysis of efficacy

Subhransu Maji (smaji@cs.umass.edu)
2021-03-10 13:39:43

*Thread Reply:* Awesome! I think it’s doable --- we are already working on this and the class deadline is a good motivation 🙂

👍 Sara Beery
Pietro Perona (perona@caltech.edu)
2021-03-07 00:53:33

And if you wish to be involved in mentoring student teams please let us know.

Stefan Schneider (sschne01@uoguelph.ca)
2021-03-08 10:24:09

Hi Everyone! For those that don't know me, I'm Stefan Schneider, a post-doc at the University of Guelph, Canada, focusing on AI for conservation and animal re-ID. I'm looking for a satellite imagery dataset to train a whale localization model, to keep shipping and fishing boats from coming within proximity of detected whales. Does anyone know of a public (or available-upon-request) dataset? I've reached out to the author here. Cheers! 🐳

🐋 Sara Beery, Caleb Robinson
aruna (arunas@mit.edu)
2021-03-08 10:30:32

*Thread Reply:* 👋 @Stefan Schneider! Have you looked at requesting tiles from Pleiades, offered by the ESA?

aruna (arunas@mit.edu)
2021-03-08 10:31:22

*Thread Reply:* It's free for research, and you can get all areas covered by sea. Please take a look at their area coverage, though, since I am not entirely sure whether they photograph both land and water bodies. It has excellent resolution at 50 cm/px

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-08 11:21:02

*Thread Reply:* @Stefan Schneider, have you met @Patrick Gray, who did this work while working for our old boss Ari Friedlaender? I can put you in touch with Ari as well. Also check with the NOAA authors who have this data. I also know of a talk by people at Vulcan Inc who are interested in this as well. Being in Canada, you probably talked to https://www.whaleseeker.com/about-us. It feels like someone needs to organize all these folks.

Alex Borowicz (alex.borowicz@stonybrook.edu)
2021-03-08 11:47:40

*Thread Reply:* Hi @Stefan Schneider! I don't have any satellite imagery I can share - Maxar is typically protective of their licenses - but we trained our satellite whale model primarily using down-sampled aerial imagery, which is available here: https://github.com/lynch-lab/Borowicz_etal_Spacewhale

Alex Borowicz (alex.borowicz@stonybrook.edu)
2021-03-08 11:49:07

*Thread Reply:* If I remember right, Guirado et al. used Google Earth images - doable, but you lose a lot of control because of Google's pansharpening algorithm. They're also not full resolution and it's not super clear how much that matters if you want to feed in full-res imagery

Stefan Schneider (sschne01@uoguelph.ca)
2021-03-08 12:09:05

*Thread Reply:* Hi Aruna. Thanks for this! I'll definitely take a look through the ESA tiles from Pleiades

Stefan Schneider (sschne01@uoguelph.ca)
2021-03-08 12:13:36

*Thread Reply:* @Ben Weinstein It would be amazing if you could put me in touch with Patrick Gray & Ari Friedlaender! Which NOAA authors are you referring to (sorry if I missed something)? Vulcan Inc sounds like a great contact. Same with this Montreal group that I'll definitely reach out to. Thanks so much!

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-08 12:14:20

*Thread Reply:* @Patrick Gray should be wandering around somewhere. If he doesn't jump in, I'll email him.

Stefan Schneider (sschne01@uoguelph.ca)
2021-03-08 12:18:13

*Thread Reply:* @Alex Borowicz This is great! I'll definitely take a look at the repo/data and let you know if I have any questions. Cheers!

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-09 16:01:11

*Thread Reply:* @Vienna Saccomanno.

Vienna Saccomanno (v.r.saccomanno@tnc.org)
2021-03-09 16:14:36

*Thread Reply:* Hi all - The Nature Conservancy is scoping the potential for use of VHR satellite imagery as a tool to survey large whales to meet conservation monitoring objectives in California, so it's great to see mutual interest in this topic. @Stefan Schneider we'd love to hear more about your work in Canada and see if there are ways to collaborate.

Patrick Gray (patrick.c.gray@duke.edu)
2021-03-10 09:27:19

*Thread Reply:* Hey @Stefan Schneider! I know some of your papers pretty well, good to hear from you. My advisor and some collaborators are still working on this though I've moved on mostly. They're taking a large archive we have of drone imagery of whales and downsampling to satellite imagery resolution and using that to train a model for right whales. Working with some folks from BAS and Canada's DFO. When I was doing this I was using all WorldView data and it was working alright but a huge effort to find suitable training data as you likely well know. I ended up largely just searching in common bays and breeding grounds to build up an initial dataset and then iteratively applied the model to find more. Happy to answer any specifics but again I'm a little out of date now on this work!

Stefan Schneider (sschne01@uoguelph.ca)
2021-03-10 09:36:21

*Thread Reply:* @Vienna Saccomanno Great to make the connection! It's early days and we're just ramping up, but I'm excited to connect with others and hopefully collaborate. Right now it's the pursuit of initial datasets to build a baseline of what we can expect from an object detector.

Stefan Schneider (sschne01@uoguelph.ca)
2021-03-10 09:40:46

*Thread Reply:* @Patrick Gray Thanks for reaching out! It's great to chat with someone who has tackled this problem. Downsampling from drone imagery sounds like a nice solution. And yes, finding suitable training data is most definitely the challenge. I'm more or less surveying right now, trying to find any sort of public dataset. Would you happen to have a link to the WorldView data you were using? Or anything that may act as a thread to pull? Cheers!

Patrick Gray (patrick.c.gray@duke.edu)
2021-03-10 09:43:33

*Thread Reply:* Hey @Stefan Schneider unfortunately this was all from access through an NSF polar program grant (via the Polar Geospatial Center) and we can't share any of the data. I've also since lost access myself so can't pull any more. Maxar doesn't seem as interested in this problem as they did in their DigitalGlobe incarnation.

Stefan Schneider (sschne01@uoguelph.ca)
2021-03-10 11:21:49

*Thread Reply:* @Patrick Gray no worries. This seems to be the way it goes for valuable datasets. I'm sure we all wish we could bring all this data together to train a universal performant model

aruna (arunas@mit.edu)
2021-03-08 16:18:35

Are there folks who have used https://github.com/knjcode/imgdupes?

aruna (arunas@mit.edu)
2021-03-08 16:19:05

I am curious if there's a way to measure the similarity

aruna (arunas@mit.edu)
2021-03-08 16:19:19

E.g., delete if 60% similar but keep if 59.2% similar.

aruna (arunas@mit.edu)
2021-03-08 18:26:10

*Thread Reply:* Ah, it's related to the hamming distance. 🙂 All good here.
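[Editor's note: the similarity thresholds in perceptual-hash dedup tools like imgdupes do indeed boil down to a Hamming distance between image hashes. A minimal sketch of the idea, assuming numpy and grayscale arrays; the hash size and "near-duplicate" noise level are illustrative, not how imgdupes itself is implemented:]

```python
import numpy as np

def average_hash(img, hash_size=8):
    # Downscale to hash_size x hash_size by mean pooling, then threshold
    # each cell against the global mean -> a 64-bit perceptual fingerprint.
    h, w = img.shape
    img = img[: h - h % hash_size, : w - w % hash_size]
    bh, bw = img.shape[0] // hash_size, img.shape[1] // hash_size
    small = img.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def similarity(img_a, img_b, hash_size=8):
    # Fraction of matching hash bits: 1.0 = identical hashes,
    # 0.0 = every bit differs (maximum Hamming distance).
    ha, hb = average_hash(img_a, hash_size), average_hash(img_b, hash_size)
    return 1.0 - np.count_nonzero(ha != hb) / ha.size

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = a + rng.normal(0.0, 0.01, a.shape)  # a near-duplicate
print(similarity(a, a))  # 1.0
print(similarity(a, b))  # typically close to 1.0 for a near-duplicate
```

A "delete if ≥ 60% similar" rule is then just a threshold on this score (equivalently, on the Hamming distance in bits).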

Océane (boulaisoceane@gmail.com)
2021-03-08 18:46:41

Is there an easy way to access the FGVC7 extended abstract submissions from CVPR 2020?

Sara Beery (sbeery@caltech.edu)
2021-03-08 18:47:16

*Thread Reply:* I believe most of them are on arxiv? Is there one you can't find?

Sara Beery (sbeery@caltech.edu)
2021-03-08 18:48:02

*Thread Reply:* It looks like there are direct links here as well: https://sites.google.com/corp/view/fgvc7/program?authuser=0

🙌 Océane
🙏 Oisin Mac Aodha
Océane (boulaisoceane@gmail.com)
2021-03-08 18:48:50

*Thread Reply:* Exactly what I was after. Thank you!

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-10 12:02:31

Are there remote sensing experts in the channel? Maybe @Sarra Alqahtani. For our Everglades bird detection project, I'm trying to get a sense of what our image targets would look like at different image resolutions. We are considering buying new cameras for long-range flights, or even investing in satellite data. I have 1cm drone imagery over known areas. My simple plan is to resample the train and test data to lower resolutions (5, 10, 20, 30, 50cm) and measure our ability to recover any signature of their presence, both visually and using our existing object detection models. Is there a way to make this more rigorous? Any other factors I should consider? What is the best resampling method to approximate a native collect at that resolution? Mean?

aruna (arunas@mit.edu)
2021-03-10 12:05:22

*Thread Reply:* Hi Ben, I am not an expert in this area, but I am taking a computer vision class this semester, and one approach we spoke of here is to use a Gaussian blur + sub-sampling. It's one of the recommended ways to downscale images, and between successive down-sampling steps you mostly lose fine detail. Plus up-sampling with Laplacian pyramids. Sorry if you already knew this - was excited about this class from this morning, and jumped in with a response. : )

👍 Ben Weinstein
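[Editor's note: the mean-resampling plan above can be sketched in a few lines of numpy. Mean pooling is only a crude stand-in for a sensor's real point-spread function — a Gaussian pre-blur, as suggested above, models optics better — so treat the factors here as illustrative:]

```python
import numpy as np

def downsample_mean(img, factor):
    # Mean-pool factor x factor blocks: e.g. factor=5 turns 1 cm/px
    # imagery into ~5 cm/px. Crops edges that don't divide evenly.
    h, w = img.shape
    img = img[: h - h % factor, : w - w % factor]
    nh, nw = img.shape[0] // factor, img.shape[1] // factor
    return img.reshape(nh, factor, nw, factor).mean(axis=(1, 3))

# 1 cm/px drone tile -> simulated 5, 10, and 50 cm/px versions
tile = np.random.default_rng(1).random((1000, 1000))
for f in (5, 10, 50):
    print(f, downsample_mean(tile, f).shape)
```

The same train/test split can then be evaluated at each simulated resolution to chart where the detection signal degrades.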
Sara Beery (sbeery@caltech.edu)
2021-03-10 12:20:31

*Thread Reply:* I remember @Sarra Alqahtani discussing challenges surrounding switching between actual data collected with different resolutions vs subsampling high resolution data in her mine detection work?

Sarra Alqahtani (sarra-alqahtani@utulsa.edu)
2021-03-11 17:50:38

*Thread Reply:* Hi @Ben Weinstein. Sorry for the late response. What we are currently doing is to train our model on a mix of high-resolution images (from drone) and downsampled images (low resolution, from satellite); then during testing we only use low-resolution images. We use GANs, and it is a risky technique since the model may invent unreal details, but our point is to improve the accuracy of the object detection, so we are not that worried about the accuracy of the resolved images (landscapes with gold-mining objects). I would think your goal is the same unless you want to differentiate between birds. I hope this helps.

👍 Ben Weinstein
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-11 17:51:34

*Thread Reply:* have you seen this strategy elsewhere? When you say a mix, do you mean at the same time, or sequentially?

Sarra Alqahtani (sarra-alqahtani@utulsa.edu)
2021-03-11 18:35:55

*Thread Reply:* It's like 80% of the dataset is HR and the rest is LR. I will find the papers we got this idea from. You could try the sequential approach and see if it improves the results, just as a form of transfer learning
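[Editor's note: the 80/20 HR/LR training mix described above can be sketched as a simple list-building step. The paths, fraction, and helper name here are illustrative, not from any of the projects discussed:]

```python
import random

def build_mixed_train_list(hr_paths, lr_paths, hr_fraction=0.8, seed=0):
    # Assemble a training list that is ~hr_fraction high-resolution images
    # and the remainder low-resolution, then shuffle so batches mix both.
    rng = random.Random(seed)
    n_lr = round(len(hr_paths) * (1 - hr_fraction) / hr_fraction)
    mixed = list(hr_paths) + rng.sample(lr_paths, min(n_lr, len(lr_paths)))
    rng.shuffle(mixed)
    return mixed

train = build_mixed_train_list([f"hr_{i}.tif" for i in range(80)],
                               [f"lr_{i}.tif" for i in range(40)])
print(len(train))  # 100 images: 80 HR + 20 LR
```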

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-11 18:40:40

*Thread Reply:* awesome. We have a lot of use cases for cross-resolution work, and never really explored the literature. Thanks.

Grace Hansen (ghansen33@gatech.edu)
2021-03-11 12:08:32

Hi everyone, my name is Grace Hansen and I am working with @Isha Palakurthy at the Georgia Institute of Technology to develop a vaccine dispensing device to prevent the spread of rabies among foxes on our campus. Our goal is to have the device be triggered by the appearance of a fox in a wooded area, so this requires image recognition without a reliable internet connection.

Would anyone have any suggestions/resources regarding an edge computing project such as this? We are also grateful for any advice just in approaching the problem. At the moment, we have a dataset of images including images of foxes, and we are also gathering camera trap images here on campus, though these are unlabeled and still limited. Also, most of the tools we are looking at involve using a CNN, but we are also trying to figure out what preprocessing may be necessary given that we want to deploy this on a remote device. We’re aiming to start off by deploying a prototype at a smaller scale for now. Any advice (or further questions) is greatly appreciated!

Thank you @Ben Seleb for the invite to this excellent community!

Elijah Cole (Deactivated) (ecole@caltech.edu)
2021-03-11 12:09:50

*Thread Reply:* @Sam Kelly might have a good sense for the right edge device?

❤️ Sara Beery, Grace Hansen
Sam Kelly (sam@conservationxlabs.org)
2021-03-11 12:14:20

*Thread Reply:* I am personally a fan of the Google Coral ecosystem - very easy to use. As with any edge device, figuring out a way to keep the power draw down is key! (This is something we have been working on at CXL.) I would be more than happy to jump on a call with you to help out - just DM me
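[Editor's note: one common power-saving pattern for edge camera traps like this is to gate expensive CNN inference on cheap frame differencing. A minimal sketch, assuming grayscale numpy frames; the threshold value is illustrative and would need tuning per site and lighting:]

```python
import numpy as np

def motion_score(prev_frame, frame):
    # Mean absolute pixel difference between consecutive grayscale frames.
    return float(np.mean(np.abs(frame.astype(np.float32)
                                - prev_frame.astype(np.float32))))

def should_run_detector(prev_frame, frame, threshold=8.0):
    # Only wake the CNN when the scene actually changed; the threshold
    # is illustrative and needs tuning per site and lighting.
    return motion_score(prev_frame, frame) > threshold

rng = np.random.default_rng(2)
still = rng.integers(0, 255, (120, 160), dtype=np.uint8)
moved = still.copy()
moved[40:80, 60:100] = 255  # simulate an animal entering the frame
print(should_run_detector(still, still))  # False
print(should_run_detector(still, moved))
```

On a Coral or Jetson-class device this kind of gate can keep the accelerator asleep most of the time.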

Grace Hansen (ghansen33@gatech.edu)
2021-03-11 12:16:48

*Thread Reply:* Thank you! That would be great, we will message

Rosho (rbam.vc@gmail.com)
2021-03-15 08:42:12

*Thread Reply:* The TrailGuard AI project has used an Intel Movidius Myriad 2 VPU. For edge devices, you can also test the NVIDIA Jetson Nano. I made a camera trap with the Google Coral Dev Board.

Sara Beery (sbeery@caltech.edu)
2021-03-11 12:44:08

iWildCam 2021 is LIVE!!! https://www.kaggle.com/c/iwildcam2021-fgvc8

"Camera traps enable the automatic collection of large quantities of image data. Ecologists all over the world use camera traps to monitor biodiversity and population density of animal species. In order to estimate the abundance and density of species in camera trap data, ecologists need to know not just which species were seen, but also how many of each species were seen. However, because images are taken in motion-triggered bursts to increase the likelihood of capturing the animal(s) of interest, object detection alone is not sufficient as it could lead to over or undercounting. For example, if you get 3 images taken at one frame per second and in the first image you see 3 gazelles, in the second you see 5 gazelles, and in the last you see 4 gazelles, how many total gazelles have you seen? This is more challenging than strictly detecting and categorizing species as it requires reasoning and tracking of individuals across sparse temporal samples."

Please share to your communities, and if you're on twitter help me out by sharing our twitter thread! https://twitter.com/sarameghanbeery/status/1370065404148678657

📸 Stefan Schneider, Declan, aruna, Oisin Mac Aodha, Henrik Cox (Sentinel), Daniel Grzenda, Omiros Pantazis, Armin Bazarjani, Gyri Reiersen, Olivier Gimenez, Rosho, Dan Morris
📷 Ixchel Meza, Armin Bazarjani, Gyri Reiersen
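[Editor's note: the gazelle burst example above can be made concrete. Naively summing detections across frames over-counts, while the largest single-frame count gives only a lower bound — the true count requires tracking individuals across frames, which is exactly the challenge's point. A trivial sketch:]

```python
def naive_sum(counts):
    # Summing detections across burst frames double-counts individuals.
    return sum(counts)

def lower_bound(counts):
    # The largest single-frame count is a lower bound on the number of
    # unique individuals present; the true answer needs tracking.
    return max(counts)

burst = [3, 5, 4]  # gazelles detected in each of three burst frames
print(naive_sum(burst))    # 12 (over-count)
print(lower_bound(burst))  # 5 (true count lies somewhere in [5, 12])
```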
gvanhorn (grv22@cornell.edu)
2021-03-11 13:03:06

Similarly, the iNat Competition is back for 2021!

https://www.kaggle.com/c/inaturalist-2021/overview https://github.com/visipedia/inat_comp/tree/master/2021

```After taking a year out, we are excited to announce the latest iNaturalist Species Classification Challenge. This challenge is part of the Eighth Workshop on Fine-Grained Visual Categorization (FGVC8) at CVPR 2021.

The iNat Challenge 2021 dataset contains 10,000 species, with a training dataset of 2.7M images that have been collected and verified by multiple users from iNaturalist. There is also a more manageable "mini" dataset with 50 images per species, for a total of 500K training images. The dataset features many visually similar species, captured in a wide variety of situations, from all over the world.

We have made several modifications to the competition this year. Similar to the 2017 competition, we are releasing the species names immediately, instead of obfuscating them. Our reason for obfuscating them in 2018 and 2019 was to make it difficult for competitors to scrape the web (or iNaturalist itself) for additional images. Because we are releasing 2.7M training images and the dataset doesn't necessarily focus on the long tail problem we feel that we can release the species names without worry. This does not mean that scraping is allowed. Please do not scrape for additional data, especially from iNaturalist or GBIF. Having the species names also makes interpreting validation results easier when examining confusion matrices and accuracy statistics.

We are also releasing location and date information for each image in the form of latitude, longitude, location_uncertainty, and date values. We have retroactively added this information to the 2017 and 2018 datasets, but this year competitors are able to utilize this information when building models. We hope this motivates competitors to devise interesting solutions to this large scale problem. You will find more information on the challenge Github page.

We are looking forward to seeing the creative solutions you come up with!``` Please share to your communities! https://twitter.com/oisinmacaodha/status/1370050350779150340

🌿 Oisin Mac Aodha, Sara Beery, Omiros Pantazis, Armin Bazarjani, Gyri Reiersen
🦌 Oisin Mac Aodha, Sara Beery, Armin Bazarjani, Gyri Reiersen
🐦 Oisin Mac Aodha, Sara Beery, Elijah Cole (Deactivated), Armin Bazarjani, Gyri Reiersen
:giraffe_face: Daniel Grzenda, Sara Beery, Armin Bazarjani, Gyri Reiersen
:zebra_face: Daniel Grzenda, Sara Beery, Armin Bazarjani, Gyri Reiersen
🎉 Sara Beery, Armin Bazarjani, Gyri Reiersen
🌎 Ixchel Meza, Armin Bazarjani, Gyri Reiersen
👍 Holger Klinck, Armin Bazarjani, Gyri Reiersen, Olivier Gimenez, Rosho
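[Editor's note: since the 2021 release includes latitude, longitude, and date per image, one simple way to feed that metadata to a model is a periodic sin/cos encoding, so that longitudes and dates wrap around smoothly. This sketch is illustrative — the trained geo-prior approaches in the literature are more sophisticated:]

```python
import numpy as np

def encode_metadata(lat, lon, day_of_year):
    # Map latitude/longitude/date onto smooth, wrap-around sin/cos
    # features a classifier can consume alongside image features.
    lam, phi = np.radians(lon), np.radians(lat)
    t = 2.0 * np.pi * day_of_year / 365.0
    return np.array([np.sin(lam), np.cos(lam),
                     np.sin(phi), np.cos(phi),
                     np.sin(t), np.cos(t)])

feat = encode_metadata(lat=21.43, lon=-157.79, day_of_year=70)
print(feat.shape)  # (6,)
```

Note the wrap-around property: longitude 180° and -180° (and day 0 vs day 365) map to essentially the same features.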
Océane (boulaisoceane@gmail.com)
2021-03-15 19:32:23

Howdy folks! Is anyone here aware of a fish (underwater) dataset for fine-grained categorization/verification?

Sara Beery (sbeery@caltech.edu)
2021-03-15 19:33:23

*Thread Reply:* I just saw https://www.frontiersin.org/articles/10.3389/fmars.2021.629485/full

🐠 Océane
Sara Beery (sbeery@caltech.edu)
2021-03-15 19:33:49

*Thread Reply:* Not sure how many species, so it may not be very fine-grained

Sara Beery (sbeery@caltech.edu)
2021-03-15 19:35:24

*Thread Reply:* And there was a fish task in LifeClef a while ago https://www.imageclef.org/lifeclef/2015/fish

Elizabeth Madin (emadin@hawaii.edu)
2021-03-15 20:52:10

*Thread Reply:* We may have a relevant dataset...will send details when I reply to your email, but feel free to remind me if I forget.

🙌 Océane
Dan Morris (agentmorris@gmail.com)
2021-04-01 12:29:17

*Thread Reply:* Depending on exactly what you're looking for, here are a few more:

https://github.com/Microsoft/Project_Natick_Analysis/releases/tag/annotated_data

https://www.fishnet.ai/

http://www.inf-cv.uni-jena.de/fine_grained_recognition.html#datasets (link is called "Croatian Fish Dataset")

...and, yes, of course there's a dataset called "DeepFish":

https://alzayats.github.io/DeepFish/

...in fact it's a pretty amazing dataset!

Dan Morris (agentmorris@gmail.com)
2021-03-22 14:44:08

New data set on LILA containing ~100k bounding boxes on trees in drone images, including classification of invasive moth damage on a subset of those bounding boxes.

http://lila.science/datasets/forest-damages-larch-casebearer/

Courtesy of the Swedish Forest Agency.

👍 gvanhorn, Oisin Mac Aodha, Srishti, Vienna Saccomanno, Benjamin Kellenberger, Riccardo de Lutio, Mitch Fennell, David, Halil Radogoshi, Chris Yeh, Sara Beery, Gyri Reiersen
👀 David
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-22 15:24:55

*Thread Reply:* Ooo! Me! I will download and train on this soon

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-23 11:49:08

*Thread Reply:* This is what a sample image looks like. If I get a moment I will generate a single csv file so others don't need to parse the xml.

❤️ Halil Radogoshi, Gyri Reiersen
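[Editor's note: for anyone who wants to flatten the annotations into a CSV themselves, here is a stdlib-only sketch. It assumes the common Pascal-VOC-style XML layout (`filename`, `object`/`name`/`bndbox`); the sample filename and damage label are made up for illustration — adjust field names if this dataset's XML differs:]

```python
import csv
import io
import xml.etree.ElementTree as ET

def voc_xml_to_rows(xml_text):
    # Flatten one VOC-style annotation file into rows of
    # (image, label, xmin, ymin, xmax, ymax).
    root = ET.fromstring(xml_text)
    image = root.findtext("filename", default="")
    rows = []
    for obj in root.iter("object"):
        b = obj.find("bndbox")
        rows.append([image, obj.findtext("name"),
                     int(float(b.findtext("xmin"))), int(float(b.findtext("ymin"))),
                     int(float(b.findtext("xmax"))), int(float(b.findtext("ymax")))])
    return rows

sample = """<annotation><filename>tile_0.png</filename>
<object><name>H_damage</name>
<bndbox><xmin>10</xmin><ymin>20</ymin><xmax>50</xmax><ymax>60</ymax></bndbox>
</object></annotation>"""

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["image", "label", "xmin", "ymin", "xmax", "ymax"])
writer.writerows(voc_xml_to_rows(sample))
print(buf.getvalue().splitlines()[1])  # tile_0.png,H_damage,10,20,50,60
```

Run over every XML file in the dataset and concatenate the rows to get the single CSV mentioned above.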
Lily Xu (lily_xu@g.harvard.edu)
2021-03-22 18:08:27

Consider adding EAAMO to your annual rotation of conference deadlines! Your work on AI for conservation (both technical and applied) will be very welcome.

Stemming from the Mechanism Design for Social Good (MD4SG) initiative, the inaugural ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO’21) aims to highlight work where techniques from algorithms, optimization, and mechanism design, along with insights from the social sciences and humanistic studies, can help improve equity and access to opportunity for historically disadvantaged and underserved communities.

Website: https://eaamo.org/ Submission date: June 3 Conference: October 5–9

👍 Caleb Robinson, Océane, David, Chris Yeh, Sara Beery
👀 Caleb Robinson
Halil Radogoshi (halil.radogoshi@skogsstyrelsen.se)
2021-03-23 02:44:49

Great to meet all of you on this channel. My name is Halil Radogoshi and I am a Senior Advisor on knowledge management and AI at the Swedish Forest Agency. We have started several AI projects at the agency, mostly dealing with forest damages. Here is a link to a video https://www.youtube.com/watch?v=WcVzys6ECys from one of our projects. Looking forward to exchanging experiences on this channel. Thank you @Dan Morris for inviting me here.

🌳 Omiros Pantazis, Rosho, Sara Beery, Gyri Reiersen
👏 Srishti, Gyri Reiersen
👋 Petar Gyurov, Gyri Reiersen
Oisin Mac Aodha (macaodha@caltech.edu)
2021-03-29 09:05:08

Hi everyone. Just a reminder that the deadline for papers for the workshop on Fine-Grained Visual Categorization is this Friday. You can find more info on our website: https://sites.google.com/view/fgvc8

:bearid: Omiros Pantazis, Subhransu Maji, Sara Beery, Océane
Sara Beery (sbeery@caltech.edu)
2021-03-31 14:42:33

Our CVPR 2021 paper just went up on arxiv! We investigate supervised and self-supervised representation learning for efficient adaptation to a diverse set of real-world Conservation/Ecology tasks, such as detecting specific animal behaviors, age, or health issues. The paper was led by @gvanhorn, along with @Elijah Cole (Deactivated), myself, Kimberly Wilber, Serge Belongie, and @Oisin Mac Aodha. Check it out!

https://twitter.com/oisinmacaodha/status/1377286648489201668

👍 Oisin Mac Aodha, Holger Klinck, Lily Xu, Rosho, gvanhorn, Stefan Schneider, Hemal Naik, Riccardo de Lutio, Dan Morris
🎉 Vienna Saccomanno, Omiros Pantazis, Stefan Schneider, Mitch Fennell, Talia Speaker, Catherine, Océane, Hannah Yin
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-31 16:28:30

*Thread Reply:* so good.

😍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-03-31 16:30:20

*Thread Reply:* I can't quite find a list of tasks, was 'is this hummingbird visiting a flower' an example, or an actual task?

Ben Weinstein (benweinstein2010@gmail.com)
2021-03-31 16:30:32

*Thread Reply:* I have a few dozen TB of video to throw at that question.

Sara Beery (sbeery@caltech.edu)
2021-03-31 16:30:44

*Thread Reply:* An actual task.

👍 Ben Weinstein
Hemal Naik (hnaik@ab.mpg.de)
2021-04-01 10:38:27

*Thread Reply:* Awesome work congratulations

Holger Klinck (hk829@cornell.edu)
2021-04-01 11:18:56

BirdCLEF 2021 competition is live: https://www.kaggle.com/c/birdclef-2021/overview

🐦 Oisin Mac Aodha, Riccardo de Lutio, Declan, Daniel Grzenda
Dan Morris (agentmorris@gmail.com)
2021-04-01 12:20:42

New data set on LILA; ~50k bounding boxes on bees and pollen:

http://lila.science/datasets/boxes-on-bees-and-pollen

...courtesy of the BeeLivingSensor project.

😍 Sara Beery, aruna, Rosho, Océane
🐝 Declan, Riccardo de Lutio, Omiros Pantazis, Carly Batist, aruna, Océane
Riccardo de Lutio (riccardo.delutio@geod.baug.ethz.ch)
2021-04-01 12:32:52

Hi everyone, the Herbarium Challenge 2021 is currently running! 🌸🌿 Still two months to go before the final submission deadline. https://www.kaggle.com/c/herbarium-2021-fgvc8/overview

🌿 Sara Beery, Omiros Pantazis, Declan, Yumna
👍 Rosho
Sara Beery (sbeery@caltech.edu)
2021-04-06 13:18:12

Just came across this cool list of "fish tech" projects: https://docs.google.com/spreadsheets/d/1G4XX7WB5dt4D5SFQmecEKVk2xSDRxWVpPOgTDHZf9-M/edit?usp=sharing

😎 Jon Van Oast
🐟 Carly Batist, Talia Speaker, Océane
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-04-06 15:31:39

*Thread Reply:* thanks for sharing here! I posted it on Twitter but forgot to share here as well

Ben Best (ben@ecoquants.com)
2021-04-08 14:44:47

*Thread Reply:* Hi Carly, do you manage this? Very cool to see all these! Here’s another one: https://www.sharkeye.org/

Ben Best (ben@ecoquants.com)
2021-04-08 14:46:12

*Thread Reply:* And https://www.fishnet.ai/

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-04-09 09:42:38

*Thread Reply:* I don’t! I just came across it and wanted to share. I believe it is managed by Kate Wing? kate@katewing.net, https://www.katewing.net/projects

Ted Schmitt (teds@allenai.org)
2021-04-06 14:51:07

Wow, we are clearly doing a bad job of making Skylight visible. Do you know the owner of the list? This is a good list but I see several things missing. Is this something they want to share and have the “community” add to?

Sara Beery (sbeery@caltech.edu)
2021-04-06 15:01:38

*Thread Reply:* I'm sure they would love additions (or at least I would if I were them!), but I don't know them personally.

Looks like the owner is Kate Wing, maybe you could reach out to her directly? https://www.linkedin.com/in/kate-wing-11b7029

Sara Beery (sbeery@caltech.edu)
2021-04-06 15:02:03

*Thread Reply:* https://www.katewing.net/

Sara Beery (sbeery@caltech.edu)
2021-04-06 15:02:28

*Thread Reply:* kate@katewing.net

Ted Schmitt (teds@allenai.org)
2021-04-06 18:05:14

*Thread Reply:* Cool, thanks

Petar Gyurov (pgyurov93@gmail.com)
2021-04-07 09:30:36

Wondering if anyone has any conservation (or related) contacts/projects in Costa Rica :flag_cr: ? Will be travelling there soon and I'm interested in opportunities and networking. Cheers!

🤩 Sara Beery
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-04-07 09:43:04

*Thread Reply:* definitely reach out to the folks at Osa Conservation! https://osaconservation.org/ They do a ton of work in the Osa peninsula region, working with communities to conserve forests, wetlands, coasts, and they have a sustainable ag program

👍 Petar Gyurov, Mitch Fennell
🌿 Lily Xu
Gyri Reiersen (gyri.reiersen@tum.de)
2021-04-08 07:02:02

*Thread Reply:* @David?

David (dwddao@gmail.com)
2021-04-08 07:50:32

*Thread Reply:* I’m currently at La Cotinga in Osa 🙂 but heading towards a large restoration project at the border of Panama, send me a DM!

😃 Carly Batist
Ben Weinstein (benweinstein2010@gmail.com)
2021-04-08 15:26:20

I think I would have known, but is anyone else in the group creating drone images over bird colonies? I'm testing the waters for a generalization study. I've got 4 datasets (one seen below), annotated or unannotated. Let me know if there are people to contact.

Sara Beery (sbeery@caltech.edu)
2021-04-08 15:27:25

*Thread Reply:* Did you talk to @Benjamin Kellenberger?

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-08 15:27:35

*Thread Reply:* ya, i wrote him.

Sara Beery (sbeery@caltech.edu)
2021-04-08 15:29:48

*Thread Reply:* Also I know you chatted with Mark Koneff, but just in case I think there might be several others with datasets in the Community of Practice on migratory bird surveys he's running. Maybe we could email that group?

👍 Ben Weinstein
Ben Weinstein (benweinstein2010@gmail.com)
2021-04-08 15:30:11

*Thread Reply:* I talked to FWS this morning, but didn't know there was a group email.

Sara Beery (sbeery@caltech.edu)
2021-04-08 15:30:40

*Thread Reply:* He has a mailing list for everyone who has been involved

Sara Beery (sbeery@caltech.edu)
2021-04-08 15:31:02

*Thread Reply:* Could probably pass on a request if you write up a blurb

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-08 15:32:01

*Thread Reply:* k, i'll DM you.

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2021-04-09 02:27:14

*Thread Reply:* Hi! I can answer @Ben Weinstein’s question here: our birds dataset over West Africa is publicly available on lila.science: http://lila.science/datasets/aerial-seabirds-west-africa/ The accompanying paper can be found under this link: 21 000 birds in 4.5 h: efficient large‐scale seabird detection with machine learning - Kellenberger - - Remote Sensing in Ecology and Conservation - Wiley Online Library

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-09 11:42:28

*Thread Reply:* my bad! it's not listed here http://lila.science/datasets

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-09 11:43:50

*Thread Reply:* @Dan Morris or @Siyu Yang, bump to add to front page. Is there another way to browse this list besides the datasets tab, in case I missed anything else?

Dan Morris (agentmorris@gmail.com)
2021-04-09 12:10:25

*Thread Reply:* Oops! Fixed.

Dan Morris (agentmorris@gmail.com)
2021-04-09 12:11:15

*Thread Reply:* Those of you who have contributed data to LILA know that we send out a password-protected preview page first; when we make the page live, we have to un-check two boxes (for password and listing), and I forgot to un-check one of the two boxes.

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-09 12:14:48

*Thread Reply:* thanks for all this organization work, it is crazy important. I'm rounding up UAV datasets for bird detection. No idea yet if even a baseline model is feasible in this area; the problem is pretty unconstrained across taxa, background, spatial resolution, etc.

David Will (david.will@islandconservation.org)
2021-04-14 09:41:30

*Thread Reply:* There were a number of us interested in the same question for monitoring seabirds on islands in the pacific - and started the #ai4pacificislandseabirds channel. @Mari Reeves @Dena @Maddie Hayes @Vienna Saccomanno all have active projects monitoring seabirds. We could also help connect you with other datasets - let me know if you want to connect.

😍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-04-14 11:43:36

*Thread Reply:* Thanks @David Will, I have data from @Vienna Saccomanno already. I'm definitely still looking for more data, either annotated (preferably) or unannotated, that fits the general spec: visible birds in UAV imagery at <3cm resolution.

🙌 Vienna Saccomanno
Sara Beery (sbeery@caltech.edu)
2021-04-29 16:59:42

*Thread Reply:* Anyone looked at an approach like this for counting birds? https://arxiv.org/abs/2104.08391

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-29 17:07:31

*Thread Reply:*

👍 Sara Beery
Mari Reeves (mari_reeves@fws.gov)
2021-05-21 16:36:36

*Thread Reply:* I'm interested in this for boobies in Lehua. Can we talk?

Lily Xu (lily_xu@g.harvard.edu)
2021-04-09 13:36:05

NeurIPS 2021 will include a Datasets and Benchmarks Track. Might be a great opportunity to get some wildlife and ecological datasets out to the broader ML community!

https://neuripsconf.medium.com/announcing-the-neurips-2021-datasets-and-benchmarks-track-644e27c1e66c

👍 Sara Beery, Elijah Cole (Deactivated), Océane
Silvia Zuffi (silvia@mi.imati.cnr.it)
2021-04-11 08:08:51

April 13, 2021, 9:00–9:50 AM CEST: Where Conservation Meets AI-Enabled Vision [E32301]

Speakers: Carl Chalmers (Senior Lecturer in Machine Learning and Applied Artificial Intelligence, Liverpool John Moores University) and Paul Fergus (Reader in Machine Learning, Liverpool John Moores University). In response to an escalating crisis of wildlife poaching, the Conservation AI project aims to harness machine learning, with a focus on video and image analytics, to gain insights into animal/people interactions in the wild. We'll present our work and the technologies implemented in our conservationai.co.uk system, including projects with Knowsley Safari in the UK, the Endangered Wildlife Trust in South Africa, and the Greater Mahale Ecosystem Research and Conservation team. The session will cover both accelerated training and inference pipelines using Docker and TensorFlow Serving on NVIDIA Quadro RTX 8000 and Tesla T4 GPUs with NVIDIA cuDNN.

👍 Oisin Mac Aodha, Sara Beery, Ted Schmitt
👀 Srishti, Rosho
Silvia Zuffi (silvia@mi.imati.cnr.it)
2021-04-11 08:09:42

The above is an event at GTC

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-15 13:35:31

@Oisin Mac Aodha or @Elijah Cole (Deactivated), I was just talking with @Vienna Saccomanno about species classification with human-guided inference. I'm rereading your paper (https://arxiv.org/abs/1906.05272) on geography priors and thinking about how to build a human-constructed prior for predicting into novel areas. Given that we may have incomplete geographic information in train, or that test data might include areas outside of train, it would be useful to ask a researcher what class they expect in location X, and then either use the embeddings you have done, or a more formal Bayesian model (which was my plan) in JAGS/BUGS/Stan that is a function of a feature layer (e.g. right before the softmax in a vanilla CNN). @gvanhorn may have thoughts here.

Sara Beery (sbeery@caltech.edu)
2021-04-15 13:37:08

*Thread Reply:* One way we've thought about it is incorporating species distribution models from something like Map of Life. @Elijah Cole (Deactivated) or @Kevin Winner would be able to say more 🙂

👍 Vienna Saccomanno, Riccardo de Lutio
Oisin Mac Aodha (macaodha@caltech.edu)
2021-04-15 13:49:35

*Thread Reply:* Hey @Ben Weinstein! We have had similar thoughts, but the closest we have currently gotten is the use of externally generated range maps for eval (a bit like @Sara Beery's comment). These are still not perfect, e.g. if you click on the Map tab halfway down this page, you see that there have been observations in regions that are not inside the range (pink area). https://www.inaturalist.org/taxa/42412-Ovibos-moschatus

Oisin Mac Aodha (macaodha@caltech.edu)
2021-04-15 13:50:34

*Thread Reply:* Obviously, this is a very anecdotal example, but issues of this type as well as "range creep" do occur:

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-15 13:51:06

*Thread Reply:* yeah, it's interesting, your problem is incredibly hard because you couldn't have a person write a prior for every grid cell (though a hand-drawn map would be cool).

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-15 13:51:30

*Thread Reply:* for @Vienna Saccomanno she has islands that are a fixed unit and the species classes occur in somewhat known combinations at each island.

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-15 13:51:47

*Thread Reply:* not every island is in train, or else we would just do a site level embedding.

Oisin Mac Aodha (macaodha@caltech.edu)
2021-04-15 13:52:05

*Thread Reply:* The other question would be, is there a way to do the human-in-the-loop version efficiently for tens of thousands of species? I'm also assuming that the common species (i.e. those whose ranges are easy to guess) are well represented in your dataset anyway.

Ben Weinstein (benweinstein2010@gmail.com)
2021-04-15 13:52:16

*Thread Reply:* so I was thinking about having her parameterize a multinomial distribution by hand.

Oisin Mac Aodha (macaodha@caltech.edu)
2021-04-15 13:52:18

*Thread Reply:* The island example is a great one.

Oisin Mac Aodha (macaodha@caltech.edu)
2021-04-15 13:53:06

*Thread Reply:* The current hack would be to just manually generate some "pseudo positive" training examples and add them to the training set.

👍 Ben Weinstein, Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-04-15 13:53:33

*Thread Reply:* rather than going to https://botorch.org/?

Oisin Mac Aodha (macaodha@caltech.edu)
2021-04-15 13:55:28

*Thread Reply:* The challenge with any of this within the current framework (bayesian or not) is that I suspect it will be very difficult to influence the output of the network with a very small number of examples in very localized regions. The network will basically have to devote capacity to remembering just those observations.

👍 Ben Weinstein, Vienna Saccomanno
Heather Lynch (heather.lynch@stonybrook.edu)
2021-04-15 16:40:52

*Thread Reply:* I think integrating HSMs (vis-à-vis @Sara Beery's suggestion) is really the way to go here. If ecologists believe those HSMs actually mean something, they should be useful for narrowing down a species identification.

👍 Oisin Mac Aodha
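One minimal way to encode the hand-specified multinomial prior discussed in this thread: treat the expert's per-site class expectations as a prior and reweight the classifier's softmax outputs, renormalizing afterwards (implicitly assuming roughly uniform class balance in training). This is only a sketch of the idea, not the method from the linked paper; `apply_site_prior` is a hypothetical name:

```python
def apply_site_prior(probs, prior, eps=1e-9):
    """Reweight per-image class probabilities by an expert-specified
    site-level class prior, then renormalize.

    probs: list of rows, each a list of softmax probabilities per class
    prior: expert-specified class probabilities for the site
    eps keeps classes the expert zeroed out from becoming exactly
    impossible (the classifier can still override a wrong prior).
    """
    total = sum(prior)
    prior = [p / total for p in prior]
    out = []
    for row in probs:
        weighted = [p * (q + eps) for p, q in zip(row, prior)]
        z = sum(weighted)
        out.append([w / z for w in weighted])
    return out
```

With a prior of `[0, 1]` (species A believed absent from the island), a 0.6/0.4 prediction for A flips to B; a uniform prior leaves the prediction unchanged. Oisin's caveat still applies: this only shifts outputs post hoc and doesn't teach the network anything about the new region.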
Sara Beery (sbeery@caltech.edu)
2021-04-19 15:13:48

Call for papers for ICML Climate Change AI Workshop:

"Climate Change AI will be hosting a virtual workshop on climate change and machine learning alongside the ICML 2021 machine learning conference in July. The goal of this workshop is to facilitate networking and exchange between those working in machine learning and those working in energy/climate-related areas.

There are several ways to participate:
• Be a mentor or mentee in our submission mentorship program (applications due April 28th)
• Submit a paper or proposal on your work at the intersection of climate change and machine learning (submissions due May 31st)
• Attend! The workshop will be virtual on July 23rd or 24th, 2021.

Thinking about submitting? We are holding webinars to explain what we are looking for on Friday, April 23rd (at 1:30 PM Eastern Time / 6:30 PM London time) and Tuesday, April 27th (at 9:00 AM Eastern Time / 2:00 PM London time / 9:00 PM Beijing Time). We will give advice on how to prepare a successful submission, and an opportunity to ask questions regarding the mentorship program.

For more details, see: https://www.climatechange.ai/events/icml2021.html or reach out to climatechangeai.icml2021@gmail.com with any questions"

👍 Oisin Mac Aodha, Elijah Cole (Deactivated), Riccardo de Lutio, Scott Hosking, Hemal Naik, Gyri Reiersen
🎉 Jon Van Oast
Jon Van Oast (jon@wildme.org)
2021-04-24 22:32:54

thought this might be of interest to folks here who might be curious about combining their research with artistic expression. proposals due by may 2. https://zkm.de/en/open-call-viable-data-remote-residency

👍 Sara Beery, Frederic, Riccardo de Lutio
Anne Dangerfield (anne@arribada.org)
2021-04-27 12:33:28

Hello all! I work with the Arribada Initiative on developing projects that combine affordable tech and AI for conservation. Great to be on the channel!

👋 Declan, Jason Holmberg (Wild Me), Lily Xu, Sara Beery, Carly Batist, Thijs, Alasdair Davies
👀 Alasdair Davies
💚 Alasdair Davies
John Payne (drjohnpayne@gmail.com)
2021-04-29 16:17:55

Is anyone else working with Detectron2? I’m trying to solve a problem with multi-GPU inference, which it wasn’t designed for. I’ve been using it for a year so I can reciprocate with my own hard-won knowledge of it.

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:22:24

Hi everyone --- I'm Barry Brook from the University of Tasmania. I run an extensive network of camera traps, and use the MegaDetector as part of my image-processing pipeline. My group is also developing a species classifier in EfficientNet.

👋 Sara Beery, Riccardo de Lutio, Ștefan Istrate, Tomer Nahshon
👍 Stuart Neilon
Sara Beery (sbeery@caltech.edu)
2021-04-29 21:26:48

*Thread Reply:* Awesome! I love hearing that MegaDetector is useful 🙂

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:27:38

*Thread Reply:* Extremely! And it's accurate too, even for finding creatures in deep, dark Tasmanian rain forests ;)

😍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-04-29 21:28:10

*Thread Reply:* How is the species ID going?

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:30:39

*Thread Reply:* Our latest version is very good, and we're working on various ways to improve it further. We recently did a test on 22K out-of-sample images. This was the result (top-1):

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:31:24

*Thread Reply:* Wow, that's fantastic! Those numbers look great 🙂

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:31:47

*Thread Reply:*

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:31:55

*Thread Reply:* Just a better format, sorry for the replacement.

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:32:03

*Thread Reply:* Yes, we're very happy with its performance!

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:32:33

*Thread Reply:* Super impressive for the Bassian thrush with 27 examples

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:32:46

*Thread Reply:* It's currently trained on ~300K expert-labelled images, but next round will be on >500K, with some new categories added, and higher version of EfficientNet (the above is B3).

👍 Sara Beery
Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:33:10

*Thread Reply:* Oh, remember, that was just for the test set. It was trained on about 2K Bassian images.

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:33:23

*Thread Reply:* ahhhhh gotcha, I thought that was training n

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:33:32

*Thread Reply:* Still awesome 🙂

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:34:00

*Thread Reply:* It is, given that Bassian Thrushes are actually super camouflage experts!

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:34:59

*Thread Reply:* How do you use MegaDetector in combo with this model? Are you training on crops? Or is it more of an ensemble approach?

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:36:08

*Thread Reply:* Exactly. We take the MegaDetector crops and resize them to 256 px for EfficientNet. Then we write the species label to an unused metadata field on the image, and upload to Camelot for final human verification.

👍 Olivier Gimenez
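The crop step Barry describes could be sketched roughly as below, assuming the standard MegaDetector batch-output JSON (an `images` list whose detections have `[x, y, width, height]` boxes normalized to [0, 1], with category "1" meaning "animal"). `animal_crop_boxes` and the `image_sizes` argument are illustrative, not part of Barry's actual pipeline:

```python
import json

def animal_crop_boxes(md_results, image_sizes, conf_threshold=0.8):
    """Convert MegaDetector animal detections into pixel crop boxes.

    md_results: parsed MegaDetector output dict ({"images": [...]})
    image_sizes: {file_name: (width_px, height_px)}
    Returns a list of (file_name, (left, top, right, bottom)) tuples.
    """
    boxes = []
    for entry in md_results["images"]:
        w_px, h_px = image_sizes[entry["file"]]
        for det in entry.get("detections", []):
            # Keep only confident animal detections (category "1").
            if det["category"] != "1" or det["conf"] < conf_threshold:
                continue
            x, y, w, h = det["bbox"]  # normalized [x, y, width, height]
            boxes.append((entry["file"],
                          (round(x * w_px), round(y * h_px),
                           round((x + w) * w_px), round((y + h) * h_px))))
    return boxes
```

Each returned pixel box would then be cropped and resized, e.g. with Pillow's `img.crop(box).resize((256, 256))`, before being handed to the species classifier.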
Sara Beery (sbeery@caltech.edu)
2021-04-29 21:36:36

*Thread Reply:* Oh, very cool that your workflow integrates with Camelot too!

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:37:36

*Thread Reply:* Yep, it's an essential part of the pipeline. We then analyse the data by importing the Camelot database (full export) in to R, for ecological metrics etc.

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:38:16

*Thread Reply:* With the images 'pre-classified' by our EfficientNet model, the human labelling of images in Camelot is about 10-50 times faster than if we were presented with the usual mixed bag one would get from a direct import of images.

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:39:01

*Thread Reply:* I love that you have human verification built in to your pipeline, and that the species labels help with speedup. Have you thought about how to make humans more efficient? So far I still feel like we need to have human eyes on the IDs before using them to help mitigate ML biases, but it's hard to scale.

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:40:27

*Thread Reply:* I've been thinking about ways to prioritize corrections/verification or potentially use a subset as quality control to estimate error, but everything is so imbalanced that it's hard to do robustly

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:40:38

*Thread Reply:* Well, with the Camelot interface, a human is presented with 150 thumbnails on the left sidebar, which on a large screen still show quite a bit of detail. These can then be bulk-selected, and then the human simply has to look for anomalies within the 150, and deselect those. This is what makes it so fast for us.

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:41:34

*Thread Reply:* That's awesome. Have you done any analysis of whether this causes any bias against small animals?

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:41:39

*Thread Reply:*

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:41:43

*Thread Reply:* This is an example search for class 3 (Tasmanian devils).

👍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-04-29 21:42:01

*Thread Reply:* Very Tasmania 🙂

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:42:23

*Thread Reply:* Re: my above comment, just thinking about thumbnails

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:42:26

*Thread Reply:* Indeed, haha.

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:42:37

*Thread Reply:* and also, do humans verify the empties too?

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:42:55

*Thread Reply:* It might be that some small mammals are overlooked, yes, although the detector does a good job at finding/boxing them in general, and the boxes are all quite visible on the thumbnails.

👍 Sara Beery
Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:43:08

*Thread Reply:* Yes, all the empties are checked.

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:43:54

*Thread Reply:* that's rad. Do you have any stats on corrections? How many missed animals you see?

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:44:35

*Thread Reply:* After we run the detection over our images, we partition each camera into “animal”, “human” and “blank” folders. I then manually scan the blank folder, and pick up any of the rejections and move them over to the animals folder. It doesn’t take too long to do this (typically an hour or two for a 25K-image service), and it avoids having lots of false detections in the final upload to Camelot (where they would otherwise have to be rejected by a person anyway). So it’s a trade-off as to where we want the effort focused --- I like to have a clean dataset going into the database!
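The animal/human/blank partitioning Barry describes could be sketched as follows, again assuming the standard MegaDetector output JSON (categories "1" = animal, "2" = person, "3" = vehicle). `partition_images` and `FOLDER_FOR` are illustrative names, and routing each image by its single highest-confidence detection is a simplification of whatever the real pipeline does:

```python
import json
import os
import shutil

# MegaDetector category map: "1" = animal, "2" = person, "3" = vehicle.
FOLDER_FOR = {"1": "animal", "2": "human"}

def partition_images(md_json_path, out_dir, conf_threshold=0.95):
    """Copy images into animal/human/blank subfolders based on their
    highest-confidence MegaDetector detection above the threshold."""
    with open(md_json_path) as f:
        results = json.load(f)
    for name in ("animal", "human", "blank"):
        os.makedirs(os.path.join(out_dir, name), exist_ok=True)
    for entry in results["images"]:
        dets = [d for d in entry.get("detections", [])
                if d["conf"] >= conf_threshold]
        if not dets:
            folder = "blank"
        else:
            best = max(dets, key=lambda d: d["conf"])
            folder = FOLDER_FOR.get(best["category"], "blank")
        shutil.copy2(entry["file"], os.path.join(out_dir, folder))
```

The manual pass over the blank folder then only has to rescue the (hopefully few) animals the detector missed at the chosen threshold.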

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:44:52

*Thread Reply:* in the last set of images we processed, we set its confidence threshold to 95%. For 23,894 new images, it failed to detect 1,054 real images (4.7%) and falsely detected 331 (1.4%). Pretty good!

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:45:14

*Thread Reply:* that's really cool 🙂

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:45:25

*Thread Reply:* Thanks for answering so many questions!!

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:45:37

*Thread Reply:* My pleasure -- thanks for all your work on this Sara!

❤️ Sara Beery
Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:46:24

*Thread Reply:* We were talking in our group about ways the detector could be further improved, with a view to helping classifiers. The ideal would be if a future ‘bounding box’ was not a rectangle, but a complex polygonal outline (i.e., tracing, even roughly, the animal profile). This would mean the image information about the animal would be extracted with as little ‘noise-injecting’ background as possible (even with tight boxes, often up to half of the pixels are still non-animal). But I acknowledge that such an approach might be quite difficult/tedious to train!

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:47:54

*Thread Reply:* Sooooo, if you want to try something fun, test your data on this demo: https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/deepmac_colab.ipynb

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:48:23

*Thread Reply:* It does class-agnostic segmentation if you provide boxes, and I've found it works astonishingly well on my CT data

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:49:11

*Thread Reply:* OK, we'll give it a go!

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:49:25

*Thread Reply:* Could be fun to use it as weak attention for a classifier, or use this as a rough way to build the polygon labels you mentioned without needing to explicitly collect a bunch of them

Sara Beery (sbeery@caltech.edu)
2021-04-29 21:50:05

*Thread Reply:* fun little research project 🙂

Barry Brook (barry.brook@utas.edu.au)
2021-04-29 21:50:15

*Thread Reply:* Indeed!

Ritwik (rittyun@yahoo.com)
2021-04-30 04:50:13

*Thread Reply:* great stuff👍

Dan Morris (agentmorris@gmail.com)
2021-05-01 21:31:00

New data set on LILA, courtesy of NOAA Fisheries:

http://lila.science/datasets/noaa-arctic-seals-2019/

~28k bounding boxes on seals in ~80k aerial images (color + IR).

Not to be confused with the previous big data set of seals and bounding boxes on LILA. This newer data set represents totally new images from a more recent survey, and a substantial improvement in both the precision of the annotations and the precision of the color/IR alignment.

🦭 Sara Beery, Barry Brook
👏 Alex Borowicz
Jes Lefcourt (jesl@vulcan.com)
2021-05-03 14:43:25

Shameless plug, but there are so few jobs in this field that I figure it's also a public service announcement: The EarthRanger team has three open job positions in Seattle for people who are passionate about conservation and conservation technology: • UI/UX Designer • Mobile Software Development Lead • Software QA Lead If you know of anyone interested, please let me know and/or direct them to https://vulcan.com/Careers.aspx . Thanks!

😍 Sara Beery, Gyri Reiersen
👍 Oisin Mac Aodha, Carly Batist, Gracie Ermi
🙌 Tanya Birch
Anne Dangerfield (anne@arribada.org)
2021-05-04 11:32:42

*OPPORTUNITY* Arribada Initiative and our partners are looking for ML experts to help ID individual gorillas. If anyone is interested or has a connection to someone who could help, please send a DM to @Anne Dangerfield or @Alasdair Davies

❤️ Sara Beery, Bistra Dilkina, Mike C
🦍 Alasdair Davies, Talia Speaker
Sara Beery (sbeery@caltech.edu)
2021-05-04 11:42:32

*Thread Reply:* @Maxime Vidal has been looking at Re-ID recently

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-05-04 12:02:09

*Thread Reply:* Perhaps also worth posting on Wildlabs in the AI for conservation group?

👍 Sara Beery
Anne Dangerfield (anne@arribada.org)
2021-05-05 09:38:59

*Thread Reply:* @Carly Batist that's a good idea

😃 Carly Batist
Mike C (mike@mikecee.solutions)
2021-05-11 04:03:49

*Thread Reply:* @Utkarsh Goel from the OOCAM team may be able to help

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-05-12 11:33:07

Potential event of interest - https://www.meetup.com/DataForGood-CorrelAid-X-Netherlands/events/277277935/

Online event: Wed, May 12, 2021, 7:30 PM
👍 Sara Beery, Lily Xu, Jon Van Oast, Thijs
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-05-12 11:33:38

**Note: the website will open in Dutch, but the event description is in English and the event will be in English. Sorry for the late notice, I just came across it myself!

Ben Weinstein (benweinstein2010@gmail.com)
2021-05-13 12:53:37

I know a couple projects here are using the tree data from our group. I cleaned up training data. @Rebekah Loving https://zenodo.org/record/4746605

😍 Sara Beery, Declan, Elijah Cole (Deactivated), Rebekah Loving, Gyri Reiersen
❤️ Halil Radogoshi
Rebekah Loving (rloving@caltech.edu)
2021-05-13 14:19:53

Thank you, @Ben Weinstein!

Sara Beery (sbeery@caltech.edu)
2021-05-14 11:38:57

Pretty awesome paper:

Priority list of biodiversity metrics to observe from space

Monitoring global biodiversity from space through remotely sensing geospatial patterns has high potential to add to our knowledge acquired by field observation. Although a framework of essential biodiversity variables (EBVs) is emerging for monitoring biodiversity, its poor alignment with remote sensing products hinders interpolation between field observations. This study compiles a comprehensive, prioritized list of remote sensing biodiversity products that can further improve the monitoring of geospatial biodiversity patterns, enhancing the EBV framework and its applicability. The ecosystem structure and ecosystem function EBV classes, which capture the biological effects of disturbance as well as habitat structure, are shown by an expert review process to be the most relevant, feasible, accurate and mature for direct monitoring of biodiversity from satellites. Biodiversity products that require satellite remote sensing of a finer resolution that is still under development are given lower priority (for example, for the EBV class species traits). Some EBVs are not directly measurable by remote sensing from space, specifically the EBV class genetic composition. Linking remote sensing products to EBVs will accelerate product generation, improving reporting on the state of biodiversity from local to global scales.

https://www.nature.com/articles/s41559-021-01451-x

Nature Ecology & Evolution
😎 Jon Van Oast, Omiros Pantazis, Riccardo de Lutio, Ben Weinstein, Declan, Carly Batist, Tony Chang, Gyri Reiersen
📡 Stefan Schneider, Océane, aruna
Océane (boulaisoceane@gmail.com)
2021-05-18 12:17:58

*Thread Reply:* @aruna and @Björn Lütjens 👀

👀 aruna, Björn Lütjens
🙌 aruna
Justin Kay (justinkay92@gmail.com)
2021-05-19 17:02:17

*Thread Reply:* This sounds rad - anyone have a pdf version they can share?

Björn Lütjens (bjoern.luetjens@gmail.com)
2021-05-20 12:49:59

*Thread Reply:*

🙏 Océane, Justin Kay
Océane (boulaisoceane@gmail.com)
2021-05-19 15:36:46

Does anyone know how accepted abstracts will virtually present their posters for FGVC8 during CVPR21? Will they be assigned a specific time/zoom room where they’ll need to be available?

Oisin Mac Aodha (macaodha@caltech.edu)
2021-05-19 15:39:04

*Thread Reply:* Hi there. We will be sending out instructions very soon.

Océane (boulaisoceane@gmail.com)
2021-05-19 15:39:45

*Thread Reply:* Thank you @Oisin Mac Aodha! Response so speedy. much fast.

Oisin Mac Aodha (macaodha@caltech.edu)
2021-05-19 15:40:09

*Thread Reply:* PS the workshop is on June 25th.

🙌 Océane, Sara Beery
Océane (boulaisoceane@gmail.com)
2021-05-19 15:40:29

*Thread Reply:* Thanks! That gives me a window to plan around in.

Justin Kay (justinkay92@gmail.com)
2021-05-19 20:50:56

Hi everyone! I’m new here, figure I’ll introduce myself - I’m Justin Kay, I work on computer vision applications for fisheries and marine conservation/ecology through my company ai.fish. I’m interested in getting more involved in research in this area and I’m excited to learn more about all the other work going on - it’s exciting to see such a large community here (thanks for the invite @Sara Beery!), and some familiar names from my Twitter lurking…look forward to getting to know you all 🙂

❤️ Sara Beery, Caleb Robinson, aruna, Jason Holmberg (Wild Me), Bistra Dilkina, Oisin Mac Aodha, Aarnav Sawant, Erik Young
🐟 Carly Batist, Thijs, Omiros Pantazis, Mitch Fennell, Scott Hosking, Erik Young
👋 Benjamin Kellenberger, Ben Weinstein
Sara Beery (sbeery@caltech.edu)
2021-05-20 10:58:34

Nice article on the role of nature/biodiversity for human health (thanks @Oisin Mac Aodha for sharing!)

https://www.euro.who.int/en/health-topics/environment-and-health/pages/news/news/2021[…]nd-biodiversity-play-a-vital-role-in-protecting-human-health

👍 Oisin Mac Aodha, Omiros Pantazis, Mitch Fennell, Scott Hosking, Riccardo de Lutio, John Beuving
Vienna Saccomanno (v.r.saccomanno@tnc.org)
2021-05-20 17:25:47

Is anyone aware of an existing AI vegetation detection and species classification model for tree seedlings? The Nature Conservancy Palmyra has a restoration project that would benefit greatly from a veg cover map. We fly the island somewhat regularly with a Wingtra with 0.7cm GSD - example image attached. Thank you 🌴

Ben Weinstein (benweinstein2010@gmail.com)
2021-05-20 17:55:36

*Thread Reply:* No, I don't believe that exists. My tree model does an adequate, but not great, job; you could retrain it.
```
from deepforest import main
from deepforest import visualize
from matplotlib import pyplot as plt
from skimage import io

m = main.deepforest()
m.use_release()
img = io.imread("/Users/benweinstein/Downloads/image.png")

m.config["score_thresh"] = 0.01
# Remove sneaky 4th transparent alpha channel
boxes = m.predict_image(img[:, :, :3].astype("float32"), return_plot=False)
boxes.label = 0
image = visualize.plot_predictions(img, boxes, color=(255, 0, 255))
plt.imshow(image)
```

Vienna Saccomanno (v.r.saccomanno@tnc.org)
2021-05-21 13:11:12

*Thread Reply:* @Ben Weinstein this is great. Will follow up, thank you.

Ben Weinstein (benweinstein2010@gmail.com)
2021-05-24 13:48:31

I know i've had meetings with a number of people here on annotation platforms and workflows. Just FYI, I just sat through the 45 minutes on Azure data labeling, and it's feeling like something promising: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-labeling-projects. I'll probably be giving it a try in a few months if anyone wants to come back to this thread. We currently use a combination of Zooniverse and QGIS.

docs.microsoft.com
👍 Gyri Reiersen
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2021-05-25 03:32:59

Related to @Ben Weinstein’s post I will shamelessly use this opportunity to officially introduce AIDE v2.0.

This version brings many new exciting functionalities, including:
• New models: AIDE v2.0 integrates Detectron2 and comes with 14 new deep learning models built-in! Whether it’s ResNe(X)t for image classification or Faster R-CNN or RetinaNet for object detection, you’re now covered.
• Model Marketplace: share your models across projects for maximum performance and reuse. Browse an ever-growing catalog for suitable pre-trained model states, upload your own from disk, import public models from the Web, etc.
• AIDE Model Zoo (teaser): AIDE will come with many pre-trained models for ecologists as time progresses, along with a new code base to train your own offline and publish to AIDE (coming soon). The first models on camera trap imagery are already in the making and will be made available soon, straight through the Model Marketplace!
• Workflow Designer: create complex model training graphs in the Web browser with zero lines of code.
• Much more: accuracy evaluation, progress monitoring, versatile data management, a million bug fixes, etc.
You can get it for free here: https://github.com/microsoft/aerial_wildlife_detection

🎉 Siyu Yang, Riccardo de Lutio, Vienna Saccomanno, Bistra Dilkina, Sara Beery, Mitch Fennell
🙌 Diego Marcos, Bistra Dilkina, Sara Beery
👍 Dan Morris
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2021-05-25 05:23:46

*Thread Reply:* I showed a demo of the AIDE v2.0 beta to some of you a couple of months ago. To the others (and to you, if you don’t mind a refresher): would you be interested in a demo session? In that case I’d gladly prepare one and communicate a day and time. Please quickly respond below or mark this comment with a ✅ if you’d like to see a demo. Thanks!

✅ Diego Marcos
Howard L Frederick (simbamangu@gmail.com)
2021-05-30 02:21:24

*Thread Reply:* @Benjamin Kellenberger fantastic news! Keen to set up with our models asap ..

Matt McCann (matthew.mccann13@gmail.com)
2021-05-25 08:56:22

Hey everyone, I'm new here (and to the field in general). I'm Matt McCann, currently a neurobiology PhD student and have a background in bioengineering. Right now I'm using computer vision and ML for pose segmentation/behavioral classification during predation (in the lab), as well as for more sophisticated ways of detecting neural signals in microscopy data. I'm looking to pivot into the conservation tech/AI space once I've finished my degree, and I'm interested in CV and remote sensing in the field, with applications to conservation. And a thank you to @Sam Kelly for pointing me towards this group.

👍 Sam Kelly, Omiros Pantazis, Sara Beery, Manish Rai
Koustubh Sharma (koustubh@snowleopard.org)
2021-05-27 08:21:14

Possibly a naive question, but would someone know of possible resources that one can use to detect objects in aerial images taken from drones?

Ben Weinstein (benweinstein2010@gmail.com)
2021-05-27 10:13:07

*Thread Reply:* Like training data? Or a prebuilt model? What kind of objects?

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-05-27 10:13:26

*Thread Reply:* You might check out AIDE: https://github.com/microsoft/aerial_wildlife_detection

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-05-27 10:14:01

*Thread Reply:*

Koustubh Sharma (koustubh@snowleopard.org)
2021-05-27 10:14:04

*Thread Reply:* First train, then identify. Marmot burrows in high altitude pastures

Koustubh Sharma (koustubh@snowleopard.org)
2021-05-27 10:14:38

*Thread Reply:* Thanks a ton for the resources Carly, will have a look

👍 Carly Batist
Koustubh Sharma (koustubh@snowleopard.org)
2021-05-27 10:32:33

*Thread Reply:* Downloading the resources from GitHub as we speak. Excited 😊

😃 Carly Batist, Sara Beery
Lily Xu (lily_xu@g.harvard.edu)
2021-05-27 14:32:44

*Thread Reply:* @Elizabeth Bondi has a series of work on detecting humans and animals from thermal infrared images!

Paper: https://projects.iq.harvard.edu/files/teamcore/files/2018_35_teamcore_spot_camera_ready.pdf Dataset: https://sites.google.com/view/elizabethbondi/dataset

sites.google.com
😊 Elizabeth Bondi, Sara Beery
Elizabeth Bondi (ebondi@g.harvard.edu)
2021-05-27 14:36:18

*Thread Reply:* Thanks @Lily Xu! Please let me know if you look into this and have questions, @Koustubh Sharma.

😊 Lily Xu
Koustubh Sharma (koustubh@snowleopard.org)
2021-05-27 21:05:46

*Thread Reply:* Hi Elizabeth, I have never used the docker, and am getting some errors when trying to run it on my computer. If not much of a problem, can I bug you sometime this week or the next for a little assistance on running the tool?

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-05-27 10:35:05

WWF’s Dave Thau is doing a series on AI for conservation and managing planetary data - https://medium.com/g-ai-a/g-ai-a-artificial-intelligence-and-planetary-scale-environmental-management-e34f2571761e

Medium
😍 Sara Beery, Talia Speaker
👍 Casey Youngflesh, Riccardo de Lutio, Bistra Dilkina, Monty Ammar
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-03 12:54:43

I've talked with a few people here about the bird detector for high-res airborne imagery. Here is a teaser with an early release model. It will continue to improve. All feedback welcome. https://colab.research.google.com/drive/1e9_pZM0n_v3MkZpSjVRjm55-LuCE2IYE?usp=sharing

colab.research.google.com
👍 Mikey Tabak, Gyri Reiersen
aruna (arunas@mit.edu)
2021-06-03 13:13:45

*Thread Reply:* Beautiful stuff!

aruna (arunas@mit.edu)
2021-06-03 13:13:50

*Thread Reply:* Where's the imagery from?

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-03 13:35:46

*Thread Reply:* Antarctica, and Minnesota

aruna (arunas@mit.edu)
2021-06-03 14:10:10

*Thread Reply:* I see, is it an open dataset?

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-03 14:10:25

*Thread Reply:* not yet. perhaps in the future. this is demo only.

👍 aruna
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-03 12:56:47
👍 gvanhorn, Lily Xu, Sara Beery, Thijs, Barry Brook, David Will, Gyri Reiersen
🦉 Benjamin Hoffman, Sara Beery, Gyri Reiersen
:the_horns: Declan, Sara Beery, Mitch Fennell, Gyri Reiersen
🐦 Vienna Saccomanno, Tony Chang, Gyri Reiersen
Sara Beery (sbeery@caltech.edu)
2021-06-04 10:26:54

Symposium on ML for Biodiversity: https://poisotlab.github.io/ml-biodiv-symposium/

poisotlab.github.io
👍 Riccardo de Lutio, Ritwik, Omiros Pantazis, Jason Holmberg (Wild Me)
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-07 12:25:14

@Sarra Alqahtani any update on cross-scale training? I'm finding that my bird object detection pipeline (see above) is incredibly sensitive to the scale of the input images. For example, changing the input size of one of the datasets from 1100 px crops to 1300 px crops changes performance by 60%. Everything gets resized to 224 x 224 as it goes into the network, so the input scale is just adjusting how large a bird appears in an image. We start with giant orthomosaic tiles, so they have to be cut into crops. It is this step that appears to be very sensitive. In blue is prediction, in orange is annotation. Smaller representations work better. Thoughts from others? @Tony Chang @Elijah Cole (Deactivated) @Benjamin Kellenberger @John Brandt. I already do heavy data augmentation to crop and resize to zoom in and out during training, but I guess it is not enough. An added wrinkle is that this is 'zero-shot' background object detection, so we are testing on a novel dataset that is withheld from training (like using South Africa to predict Australia, etc.). Each dataset has a different ground resolution.
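The scale sensitivity described above is the usual motivation for heavy scale-jitter augmentation. A minimal numpy sketch of the idea — randomly rescale the image and its boxes, then fix the output size — follows; the function name, defaults, and top-left anchoring are illustrative, not taken from the pipeline discussed here:

```python
import numpy as np

def random_scale_crop(img, boxes, out_size=448, scale_range=(0.5, 2.0), rng=None):
    """Rescale an image and its [xmin, ymin, xmax, ymax] boxes by a random
    factor, then pad/crop to a fixed output size, so a detector sees objects
    at varying apparent sizes. Nearest-neighbour resize keeps the dependency
    list to numpy only."""
    rng = rng or np.random.default_rng()
    s = rng.uniform(*scale_range)
    h, w = img.shape[:2]
    nh, nw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    ys = (np.arange(nh) * h // nh).astype(int)  # nearest-neighbour row indices
    xs = (np.arange(nw) * w // nw).astype(int)  # nearest-neighbour col indices
    resized = img[ys][:, xs]
    out = np.zeros((out_size, out_size) + img.shape[2:], dtype=img.dtype)
    ch, cw = min(nh, out_size), min(nw, out_size)
    out[:ch, :cw] = resized[:ch, :cw]           # top-left anchored for brevity
    return out, np.clip(boxes * s, 0, out_size)
```

In practice a real pipeline would also randomize the crop position and drop boxes that fall outside the crop; this only shows the scale-jitter core.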

Sara Beery (sbeery@caltech.edu)
2021-06-07 13:07:18

*Thread Reply:* Have you tried more than those 2 resolutions? I'd be curious what the performance/input size curve looks like and whether there's any way to predict what resolution in your test data will do best based on your training data/model.

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-08 16:47:36

*Thread Reply:* Just illustrating what the problem is. Here is with no data augmentation on a held-out dataset at new resolution. It makes mini ducks. Prediction in blue.

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-08 16:47:54

*Thread Reply:* Working on even heavier zoom and cropping.

Sara Beery (sbeery@caltech.edu)
2021-06-08 16:48:21

*Thread Reply:* Very interested, following along for more updates 🙂

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-14 19:37:08

if anyone would like to join, @Elijah Cole (Deactivated) and I are discussing his excellent paper tomorrow at 10PT on self-supervised learning https://arxiv.org/pdf/2105.05837.pdf. I am planning on a similar work for 80 species tree classification.

❤️ Sara Beery, Rebekah Loving
👍 gvanhorn, Ankita Shukla, Omiros Pantazis, Riccardo de Lutio
Elijah Cole (Deactivated) (ecole@caltech.edu)
2021-06-14 20:44:39

TLDR for the paper:

“When Does Contrastive Visual Representation Learning Work?” https://arxiv.org/abs/2105.05837

Recent self-supervised learning techniques like SimCLR are able to learn good representations for ImageNet classification without using any labels. Do these techniques still work if you care about datasets that are not similar to ImageNet?

In the paper, we run a bunch of experiments to figure out:
• How many images you need for both pretraining and downstream supervision;
• How important it is to pretrain on images relevant to your downstream task;
• Whether you can pretrain on datasets with noisy/degraded images; and
• How well these methods perform on challenging fine-grained tasks, e.g. iNat21.
Feel free to let me know if you have any questions!
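For reference, the contrastive objective SimCLR optimizes (NT-Xent: each embedding should match its augmented partner against everything else in the batch) can be sketched in a few lines of numpy. This is an illustrative re-implementation, not the paper's code:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss. z1 and z2 are (N, d) embeddings of two augmented views
    of the same N images; row i of z1 pairs with row i of z2."""
    z = np.concatenate([z1, z2])                       # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarities
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # never match yourself
    n = len(z1)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    # cross-entropy: partner similarity vs. all other similarities
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))
```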

arXiv.org
👍 Oisin Mac Aodha
❤️ Subhransu Maji
Elijah Cole (Deactivated) (ecole@caltech.edu)
2021-06-14 20:44:47

*Thread Reply:* Some of my favorite results:
• The gap between supervised and self-supervised performance on state-of-the-art fine-grained datasets like iNat21 is huge (~30% top-1).
• There isn’t much benefit to pretraining on more than 500k images.
• Adding more pretraining images from different domains does not necessarily lead to more general representations.

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2021-06-15 02:54:40

*Thread Reply:* Awesome! Seems to be a great work indeed; I’m looking forward to reading it. Thanks @Elijah Cole (Deactivated)!

🙂 Elijah Cole (Deactivated)
Sara Beery (sbeery@caltech.edu)
2021-06-16 11:13:40

From the CCAI Newsletter: "Symposium on applications of machine learning in biodiversity image analysis to be organized (https://climatechange.us3.list-manage.com/track/click?u=a5463f28627a77a4b2a79e7d0&id=4ddfd105b8&e=9746f73f39) at the virtual conference of Biodiversity Information Standards (TDWG), which will be held 18–22 October 2021. Abstracts will be submitted to and published in Biodiversity Information Science and Standards (BISS). The deadline for the submission of abstracts is August 2."

tdwg.org
biss.pensoft.net
👍 Oisin Mac Aodha, Omiros Pantazis, Beckett Sterner, Nico Franz, Jason Holmberg (Wild Me), Atriya Sen, David Russell
Ritwik (rittyun@yahoo.com)
2021-06-17 04:24:42

Hi, I was wondering if someone has experience about dealing with watermarks in images while training a classifier. I'm using google images to build training data for certain specific species but many times there are watermarks on the images. The models will mostly likely be sensitive to the watermarks for classification. Has anyone encountered this issue?

Elijah Cole (Deactivated) (ecole@caltech.edu)
2021-06-17 09:46:08

*Thread Reply:* I don’t know how prevalent they are, but I know there are some images with various watermarks in standard datasets like ImageNet and Places365. People don’t seem to worry about them too much in practice.

@Sara Beery didn’t you look into this for camera trap images once?

Sara Beery (sbeery@caltech.edu)
2021-06-17 10:11:37

*Thread Reply:* A bit! camera traps frequently have these bars at the top and bottom that are distinct to camera type. I've experimented with cropping them out and with leaving them in and found it didn't make much of a difference.
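Cropping those bars out is straightforward once the bar heights are known for a camera model. A toy numpy version — the default fractions are hypothetical and should be measured per camera type:

```python
import numpy as np

def strip_info_bars(img, top_frac=0.05, bottom_frac=0.08):
    """Crop the manufacturer info bars that many camera traps stamp onto the
    top and bottom of each frame. Fractions are of total image height."""
    h = img.shape[0]
    top = int(round(h * top_frac))
    bottom = h - int(round(h * bottom_frac))
    return img[top:bottom]
```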

Ritwik (rittyun@yahoo.com)
2021-06-17 10:40:38

*Thread Reply:* Thanks a lot for your inputs. I'm assuming that if the frequency, appearance and location of the marks are more or less equally distributed across classes it will not affect global evaluation metrics. Or if their frequency is so low that it is drowned in the standard error margin. In my case, I fear, they might end up being discerning features. Will try some experiments.

Sara Beery (sbeery@caltech.edu)
2021-06-17 10:46:16

*Thread Reply:* In my case, I think they are discerning features, but that there are a bunch of other discerning but correlative features the model can focus on just in the static backgrounds.

Sara Beery (sbeery@caltech.edu)
2021-06-17 10:46:31

*Thread Reply:* So with or without them, the model finds it easy to overfit

Ritwik (rittyun@yahoo.com)
2021-06-17 11:12:24

*Thread Reply:* true, a lot of overfitting opportunities for the model 🙂

Ben Augustine (ben.augustine@cornell.edu)
2021-06-17 12:07:27

*Thread Reply:* Interesting, Sara. I was concerned about these on camera trap images, but have just ignored them rather than try to crop them out. In my case, different camera brands are deployed at different sites and different sites have a somewhat different suite of species, so I thought this may increase overfitting. Irrelevant camera information on the photos correlates with species occupancy/prevalence.

👍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-06-17 12:12:12

*Thread Reply:* yes, the bars correlate with species prevalence, but so does the entire background of the image and underlying hardware-specific image statistics? So I guess my point is that yes, models can focus on the bars but without the bars they still have a lot of opportunity to overfit - which is my hypothesis as to why removing the bars didn't do much

Ben Augustine (ben.augustine@cornell.edu)
2021-06-18 13:47:36

*Thread Reply:* Makes sense.

Sara Beery (sbeery@caltech.edu)
2021-07-15 10:06:00

*Thread Reply:* Ok, I take it back, if you add a project that only has images of one species, and it's a camera type that isn't represented previously, then yep. Massive overfitting to the logo. I think probably in previous experiments I was doing there was a lot more variability in species per camera type so it was less of a problem.

Ritwik (rittyun@yahoo.com)
2021-07-15 12:10:42

*Thread Reply:* thanks @Sara Beery makes sense.. if there's a constant signal for a particular category it will be picked up by filter weights.. i'll update this space once I have some curated data (long task)

👍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-18 14:21:10

@gvanhorn or @Holger Klinck, just a thought, you've probably already done this. I'm prepping for a trip to S. Ecuador and have been listening to local recordings. There are many recordings that lack 'additional' bird info metadata, but whose sonograms clearly show other species. Like this (bay wren?) embedded in this recording. Have you explored semi-supervised weak labels to try to draw info from these unlabeled, but high-quality recordings?

gvanhorn (grv22@cornell.edu)
2021-06-18 14:22:01

Not specifically, but some of that info is captured in the checklist associated with the audio.

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-18 14:24:24

*Thread Reply:* k, sorry if that was a naive idea, but it's interesting how often I feel like the unlabeled auxiliary species is actually better quality than the often rarer labeled target species. Like this one, where I'm not sure what the target is, but I hear glossy backed thrush? and a tapaculo?

gvanhorn (grv22@cornell.edu)
2021-06-18 14:25:05

*Thread Reply:* Yeah, welcome to audio!

😅 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-18 14:25:34

*Thread Reply:* lol, sorry, didn't mean to: https://xkcd.com/1831/. i'll go back to images.

😅 gvanhorn, Carly Batist, Sara Beery
gvanhorn (grv22@cornell.edu)
2021-06-18 14:25:47

*Thread Reply:* We’ve actually been hard at work on this problem! Stay tuned for some cool stuff rolling out over the next few weeks and months

👍 Ben Weinstein, Carly Batist, Sara Beery
gvanhorn (grv22@cornell.edu)
2021-06-18 14:23:47

And the recent work of @Elijah Cole (Deactivated) is relevant for this particular example (i.e multilabel problems where you only have 1 positive label for each training sample)

👍 Sara Beery, Oisin Mac Aodha
Elijah Cole (Deactivated) (ecole@caltech.edu)
2021-06-18 15:19:45

Here’s a link:

https://arxiv.org/abs/2106.09708

Happy to discuss!
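The simplest baseline in the single-positive multi-label setting is "assume negative": treat the one observed label as positive and every unobserved label as negative, then apply ordinary binary cross-entropy. A numpy sketch of that baseline (illustrative, not the repo's code):

```python
import numpy as np

def assume_negative_bce(scores, positive_idx):
    """'Assume negative' loss for single-positive multi-label learning.
    scores: (batch, classes) raw logits.
    positive_idx: (batch,) index of the single observed positive label."""
    probs = 1.0 / (1.0 + np.exp(-scores))
    targets = np.zeros_like(scores)
    targets[np.arange(len(scores)), positive_idx] = 1.0  # observed positives
    eps = 1e-12                                          # numerical safety
    bce = -(targets * np.log(probs + eps)
            + (1.0 - targets) * np.log(1.0 - probs + eps))
    return float(bce.mean())
```

The point of the paper is that this baseline is biased (unobserved labels are not truly negative), which motivates the smarter losses it proposes.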

arXiv.org
🙌 Sara Beery, Oisin Mac Aodha
👍 Mikey Tabak
Elijah Cole (Deactivated) (ecole@caltech.edu)
2021-07-15 10:48:29

*Thread Reply:* The code is now available as well if anyone wants to play around with it:

https://github.com/elijahcole/single-positive-multi-label

gvanhorn (grv22@cornell.edu)
2021-06-23 13:06:29

Super excited to announce the launch of Sound ID for Merlin. Simply hold your phone up, press record, and Merlin will help you ID who’s singing in real time! If you happen to be in the US or Canada, download it today and try it out in your own backyard. Over the next months (and years!) we’ll be expanding the feature to cover more species and regions. https://merlinbirdid.page.link/sound-id

merlinbirdid.page.link
😍 Sara Beery, Ben Weinstein, Talia Speaker, Declan, Beckett Sterner, Casey Youngflesh, Carly Batist, Stefan Schneider, Subhransu Maji, Lauren Gillespie, Justin Kay, Dan Morris, Riccardo de Lutio, Gyri Reiersen, Cody Kupferschmidt, Wethington Michael, Ayan Mukhopadhyay
🐦 Ed Miller, Thijs, Stefan Schneider, Sara Beery, Wethington Michael
😎 Jon Van Oast, Sara Beery, Ritwik, Wethington Michael
gvanhorn (grv22@cornell.edu)
2021-06-23 13:07:47

*Thread Reply:* @Ben Weinstein it doesn’t cover S. ecuador yet but we’ll get there soon!

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:08:10

*Thread Reply:* awesome. I'm going outside to try it now.

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:09:21

*Thread Reply:* i'm also trying to do my saw-whet call to see what happens.

🤓 Casey Youngflesh, Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-06-23 13:10:34

*Thread Reply:* lol BENNNNN we're in a workshop 🤣

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:10:44

*Thread Reply:* i'm already outside.

🤩 Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-06-23 13:10:47

*Thread Reply:* (ie I want to go outside and try it too)

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:16:30

*Thread Reply:* okay @gvanhorn I have some questions. First, amazing, congrats, truly a remarkable accomplishment.

😊 gvanhorn
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:16:45

*Thread Reply:* Can I opt in to let Cornell save my recordings to improve future models?

➕ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:17:08

*Thread Reply:* Could I annotate them here, like, select 'yes' or no for a given species?

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:17:54

*Thread Reply:* like it was perfect on lesser goldfinch, but then tossed a hutton's vireo in there (it was the same call), you can imagine users contributing in a weak supervised way.

Ed Miller (ed@hypraptive.com)
2021-06-23 13:18:21

*Thread Reply:* @gvanhorn Is this feature of Merlin related to BirdNET?

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:18:24

*Thread Reply:* but watching it click on a song sparrow (nice and close) is a warm and fuzzy feeling.

gvanhorn (grv22@cornell.edu)
2021-06-23 13:19:02

*Thread Reply:* Uploading recordings to the Macaulay Library will hopefully be available by this fall. I think it will also support uploading photographs, and could have tighter integration with eBird.

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:19:26

*Thread Reply:* brilliant. The data gathering aspect of this is so valuable, like tesla watching you drive.

👍 Sara Beery, gvanhorn
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:20:13

*Thread Reply:* it did a pretty good job at a distant set of species, i was impressed with the western scrub jay it grabbed from several houses over.

👍 gvanhorn
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:21:03

*Thread Reply:* what was the key innovation that made this happen, $ for developers? More clean data? Algorithmic improvements?

gvanhorn (grv22@cornell.edu)
2021-06-23 13:21:16

*Thread Reply:* And then plenty of headroom for improved performance. One limitation that I’ve been trying to work around within Merlin is that taxonomic predictions aren’t well supported by the app UI. So we’ll hopefully keep improving upon egregious mistakes, but then more fine-grained mistakes will require a bigger UI update that lets me return taxonomic probabilities.

gvanhorn (grv22@cornell.edu)
2021-06-23 13:23:49

*Thread Reply:* Sound ID benefitted from some fore-runners like BirdNet and BirdVox (@Ed Miller this sort of answers your question), so that got us up and running pretty quickly. Then it was experience with dataset building and working with experts to refine the annotations that got us to a point where we were reasonably happy with the live experience. Certainly the experience with Seek helped out here

👍 Sara Beery
gvanhorn (grv22@cornell.edu)
2021-06-23 13:27:13

*Thread Reply:* @Ben Weinstein I guess the answer is: right team and the right data.

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:33:10

*Thread Reply:* I think that's a good answer. You didn't say, we need new and interesting and complex models.

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:33:33

*Thread Reply:* when I give talks I always ask, do we need new models, new theory or new data.

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:33:40

*Thread Reply:* and clearly all three are nice.

Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 13:33:49

*Thread Reply:* but where should we focus?

Sara Beery (sbeery@caltech.edu)
2021-06-23 13:42:12

*Thread Reply:* I feel like Grant in particular is really fantastic at curating accurate labeled data with experts in the loop, and not just the data collection but getting these difficult labels to be accurate was key.

😊 gvanhorn
gvanhorn (grv22@cornell.edu)
2021-06-23 13:53:42

*Thread Reply:* A teaser blog post (about future blog posts that will have more details….) can be found here: https://www.macaulaylibrary.org/2021/06/22/behind-the-scenes-of-sound-id-in-merlin/

macaulaylibrary.org
❤️ Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-23 15:42:41

*Thread Reply:* @gvanhorn if there is a way for me to demo a pack for the tropics when I go, happy to give feedback. Not sure if that's feasible/useful.

gvanhorn (grv22@cornell.edu)
2021-06-23 16:02:21

*Thread Reply:* It is possible, you can join the Merlin Beta here: https://merlin.allaboutbirds.org/beta/ The Beta testers will get the upcoming packs first, but currently the model in the Beta and the Production app are the same.

Merlin Bird ID - Free, instant bird identification help and guide for thousands of birds
👍 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2021-06-28 15:49:42

*Thread Reply:* i've been playing with this a bit more, really interesting. Is there a long term interest in making an API so that scientists can query recordings against the model?

🤞 Sara Beery
gvanhorn (grv22@cornell.edu)
2021-06-29 12:36:08

*Thread Reply:* Good idea! Yes and no. Yes: definitely interested, and I can see the benefit it would provide to scientists. No: I don’t have the funds queued up to support that kind of API, although I don’t think it would cost too much to run a bare-bones / no-fuss API. But we’ll see how things shake out this summer.

👍 Ben Weinstein
gvanhorn (grv22@cornell.edu)
2021-06-23 13:36:27

Pro tip for folks that want to squeeze more out of the Sound ID model: pickup an external mic! We’ll keep optimizing the model for built-in microphones, but a pretty big boost in average precision can be achieved by just using a better mic. I’ve been playing around with the following setup: Rode VideoMic GO, TRS to TRRS cable (male to male), and the head phone jack dongle for an iPhone.

👍 Sara Beery
Ed Miller (ed@hypraptive.com)
2021-06-24 21:30:16

Is anyone attending the CV4Animals Workshop at virtual CVPR tomorrow? If so, drop by the BearID poster (Paper ID 32) and say "hi"!

❤️ Sara Beery, Elizabeth Bondi, Ankita Shukla
🎉 Jon Van Oast
:bearid: Omiros Pantazis, Carly Batist
Sara Beery (sbeery@caltech.edu)
2021-06-24 21:31:37

*Thread Reply:* I'll be there with @Elizabeth Bondi!!

🎉 Elizabeth Bondi, Jon Van Oast
Ed Miller (ed@hypraptive.com)
2021-06-24 21:34:03

*Thread Reply:* I've never done a poster session. Let alone a virtual session on Gatherly! 😧

Elizabeth Bondi (ebondi@g.harvard.edu)
2021-06-24 21:56:30

*Thread Reply:* We aren't totally sure what to expect either, but it should be fun 🙂

❤️ Sara Beery
:bearid: Ed Miller
Silvia Zuffi (silvia@mi.imati.cnr.it)
2021-06-25 06:06:18

*Thread Reply:* I will also attend, no idea how this virtual poster session could go, let’s see!

❤️ Sara Beery, Elizabeth Bondi
:bearid: Ed Miller
Sara Beery (sbeery@caltech.edu)
2021-06-25 16:25:33

*Thread Reply:* FANTASTIC talk @Silvia Zuffi!!!

🎉 Elizabeth Bondi
👏 Ed Miller
Silvia Zuffi (silvia@mi.imati.cnr.it)
2021-06-25 16:44:20

*Thread Reply:* Thanks!!

Yuval Boss (yuval@yuvalboss.com)
2021-06-25 14:02:56

Hi Everyone,

I work for NOAA's Polar Ecosystems Program on aerial object detection/classification of ice-associated seal species and polar bears. I also collaborate with other projects here in the Marine Mammal Lab trying to implement ML for different detection/classification/tracking tasks in aerial or satellite image data. All the work on my end leads to aiding researchers in population assessment studies.

Some of the interesting areas of my work have to do with the challenges of:

  • small object detection in real-time
  • challenges with class imbalance and a small number of labels for some classes (200)
  • how the results of these models can be used by statisticians in their assessment/modeling of abundance
  • multimodal approaches for localization/classification in aligned thermal/color (and soon UV) imagery
  • object tracking across frames
  • image quality assessment

Here is a short article from a year ago sharing what our project is doing at a high level: fisheries.noaa.gov/feature-story/developing-artificial-intelligence-find-ice-seals-and-polar-bears-sky

My background is in CS and would love to collaborate more with other developers working on similar problems on the technical side. Please reach out if you are working on similar problems and want to chat about them!

Thank you Sara for setting this up, this community seems awesome from what I've seen so far and I'm really glad to be a part of it.

Yuval Boss - yuval.boss@noaa.gov
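On the class-imbalance point above: a common first step, before heavier techniques like focal loss or oversampling, is weighting samples by inverse class frequency for a weighted sampler or loss. A minimal numpy sketch (illustrative, not from the NOAA pipeline):

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-sample weights inversely proportional to class frequency,
    normalized so the weights average to 1 across the dataset."""
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts))
    w = np.array([1.0 / freq[l] for l in labels])
    return w * len(labels) / w.sum()
```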

NOAA
👋 gvanhorn, Stefan Schneider, Sara Beery, Declan, Ben Weinstein, Lloyd Hughes, Dan Morris, Medhini Gulganjalli Narasimhan, Gavin Kyte, David Russell
🐻‍❄️ Stefan Schneider, Carly Batist, Sara Beery, Jason Holmberg (Wild Me), Gavin Kyte
🦭 Carly Batist, Sara Beery, Caleb Robinson, Jason Holmberg (Wild Me), Tony Chang
👍 Benjamin Kellenberger, Jon Van Oast, Caleb Robinson, Jason Holmberg (Wild Me)
Gyri Reiersen (gyri.reiersen@tum.de)
2021-07-01 04:56:02

Came across this project: https://seabee.no/ Using drones and AI to monitor and map the biodiversity of the Norwegian coastal zone, specifically used for bird counting. Super cool!

seabee.no
🐥 Sara Beery, Jason Holmberg (Wild Me), David
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-07-01 12:08:30

Cool AI initiative I came across - http://www.namethatfish.com/

Name That Fish
❤️ Sara Beery, Jason Holmberg (Wild Me), Gracie Ermi
👍 Justin Kay, Jason Holmberg (Wild Me)
Ben Weinstein (benweinstein2010@gmail.com)
2021-07-01 13:22:10

*Thread Reply:* any idea who is behind this? No contact info on the website? Did I miss it?

Holger Klinck (hk829@cornell.edu)
2021-07-01 13:25:43
Ben Weinstein (benweinstein2010@gmail.com)
2021-07-01 13:26:21

*Thread Reply:* funny, that's my institution, never heard of it.

Mike C (mike@mikecee.solutions)
2021-07-06 13:39:28

Can our datasets be of value to you? A few project updates

The University of Hong Kong has just developed an app called ‘Saving Face’ for individual ID of the Napoleon wrasse:

https://onlinelibrary.wiley.com/doi/abs/10.1002/aqc.3199 https://www.scmp.com/magazines/post-magazine/short-reads/article/3134123/facial-recognition-app-used-protect-endangered

The team has developed other ML applications in conservation such as: https://www.clearbot.org/ https://openoceancam.com/

South China Morning Post
Clearbot
Ben Weinstein (benweinstein2010@gmail.com)
2021-07-08 13:00:35

I had a researcher send me annotations in Photoshop. Anyone want to guess how I might try to extract them? Some sort of export to vector graphics as SVG?

🙀 Sara Beery
Jes Lefcourt (jesl@vulcan.com)
2021-07-08 13:15:28

*Thread Reply:* Yep. If you've got the PSD file, exporting to SVG and then parsing shouldn't be too hard.
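For what it's worth, a stdlib-only sketch of that route. The element names come from the SVG spec, but the toy input below is an assumption; a real Photoshop export may use paths or other shapes instead.

```python
# Hypothetical sketch: pull box-like shapes out of an exported SVG and
# convert them to [x, y, w, h] boxes. The demo string stands in for
# whatever Photoshop actually emits.
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def boxes_from_svg(svg_text):
    root = ET.fromstring(svg_text)
    boxes = []
    for rect in root.iter(SVG_NS + "rect"):
        boxes.append([float(rect.get(a)) for a in ("x", "y", "width", "height")])
    for circ in root.iter(SVG_NS + "circle"):
        cx, cy, r = (float(circ.get(a)) for a in ("cx", "cy", "r"))
        boxes.append([cx - r, cy - r, 2 * r, 2 * r])  # circle -> enclosing box
    return boxes

demo = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect x="10" y="20" width="30" height="40"/>
  <circle cx="100" cy="100" r="5"/>
</svg>"""
print(boxes_from_svg(demo))  # [[10.0, 20.0, 30.0, 40.0], [95.0, 95.0, 10.0, 10.0]]
```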

Ben Weinstein (benweinstein2010@gmail.com)
2021-07-08 13:16:10

*Thread Reply:* thanks. i'll update here. Right now i've got the JPG, but requesting the original.

Ben Weinstein (benweinstein2010@gmail.com)
2021-07-08 14:51:30

I made a tiny clip on annotating images in QGIS if it's useful to others. This was a small addition to the meeting we had with @Vienna Saccomanno @Benjamin Kellenberger @John Payne https://deepforest.readthedocs.io/en/latest/bird_detector.html

👍 Sara Beery, David Will
Jonathan Crall (erotemic@gmail.com)
2021-07-09 14:33:20

@Ben Weinstein I've done something similar before (I think on that same, or at least a similar, dataset).

🎉 Jon Van Oast, Sara Beery
Jonathan Crall (erotemic@gmail.com)
2021-07-09 14:33:21

https://gitlab.kitware.com/viame/bioharn/-/blob/master/dev/data_tools/read_ps_count_jpeg.py

GitLab
Jonathan Crall (erotemic@gmail.com)
2021-07-09 14:36:43

Look at the `parse_photoshop_count_annots` function; Photoshop literally has a special annotation layer. In my case it was dots, but you might have circles. Figuring all this out took me way too much time; hopefully my initial legwork helps you out.

Jon Van Oast (jon@wildme.org)
2021-07-09 14:48:56

slightly off-topic, but i often think about putting the annotation data we have in an image as part of a comment in jpeg. is it crazy to imagine trying to develop a "standard" for this? (e.g. some kind of flexible chunk of json -- maybe stored as a jpeg segment or exif MakerNote )
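Purely as a sketch of that idea, a JSON blob can be stashed in a JPEG COM (comment) segment right after the SOI marker using only the stdlib. The two-byte "image" below just stands in for a real file; how tools would react to such a segment is untested here.

```python
# Embed/extract a JSON annotation blob as a JPEG COM segment (marker
# 0xFFFE, followed by a 2-byte big-endian length that counts itself).
import json, struct

def embed_json(jpeg_bytes, annots):
    payload = json.dumps(annots).encode("utf-8")
    segment = b"\xff\xfe" + struct.pack(">H", len(payload) + 2) + payload
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

def extract_json(jpeg_bytes):
    i = 2  # skip SOI, then walk marker segments
    while i + 4 <= len(jpeg_bytes):
        marker = jpeg_bytes[i:i + 2]
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == b"\xff\xfe":
            return json.loads(jpeg_bytes[i + 4:i + 2 + length].decode("utf-8"))
        i += 2 + length
    return None

fake = b"\xff\xd8"  # SOI only, standing in for a real file
out = embed_json(fake, {"boxes": [[10, 20, 30, 40]]})
print(extract_json(out))  # {'boxes': [[10, 20, 30, 40]]}
```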

Jonathan Crall (erotemic@gmail.com)
2021-07-09 15:05:20

Hey Jon 🙂, I think storing annotation data inside the image itself is probably not a great idea. But I do think existing annotation formats are lacking. I do like the general idea behind MS-COCO because it provides a relatively intuitive way to specify annotations, and I've been working on an extension of it called KW-COCO, which adds support for image sequences, multispectral imagery, and richer annotations (like polygons with holes):

https://gitlab.kitware.com/computer-vision/kwcoco

GitLab
Jonathan Crall (erotemic@gmail.com)
2021-07-09 15:08:50

I also saw there is a CameraTraps coco format: https://github.com/microsoft/CameraTraps/tree/master/data_management

There is mostly a 1-to-1 conversion between the extensions there and the extensions in the kwcoco spec. (e.g. "seqid" -> "videoid", "framenum" -> "frameindex")
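The two renames quoted in the message above are the only real part of this sketch; the sample image record and the idea of a flat per-record key remap are invented for illustration.

```python
# Rename CameraTraps-COCO extension keys to their kwcoco equivalents,
# leaving everything else untouched.
KEYMAP = {"seqid": "videoid", "framenum": "frameindex"}

def ct_to_kwcoco(image_record):
    return {KEYMAP.get(k, k): v for k, v in image_record.items()}

print(ct_to_kwcoco({"id": 1, "file_name": "a.jpg", "seqid": "s0", "framenum": 3}))
# {'id': 1, 'file_name': 'a.jpg', 'videoid': 's0', 'frameindex': 3}
```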

👍 Sara Beery
👋 Jon Van Oast
😎 Jon Van Oast, David Russell
Jon Van Oast (jon@wildme.org)
2021-07-09 15:11:09

yeah, it feels a bit hacky - especially for large datasets where it makes way more sense to have an external set of annotation data. but as a "why not?" kinda extra layer, might be kinda fun. 🙂 thanks much for the pointers on just the general topic of format of data!

Jonathan Crall (erotemic@gmail.com)
2021-07-09 15:13:11

The kwcoco CLI has a command that is supposed to help grab data and convert it into kwcoco. Currently it supports cifar, domainnet, spacenet7, and camvid, but I'm looking into adding datasets from lila.science. I also recently found out about a library that scikit-image uses for fetching data files called "pooch": https://github.com/fatiando/pooch . Ultimately, it would be really cool to have an IPFS solution for all of these public / benchmark / challenge datasets.

Jon Van Oast (jon@wildme.org)
2021-07-09 15:14:32

i am not sure if there is a strict definition of "annotation" that i am breaking here, but i wonder if this is adaptable to some of the types of features we have historically been interested in, like edges (fluke / fin) and sets of points.

Jonathan Crall (erotemic@gmail.com)
2021-07-09 15:17:46

kwcoco does have support for keypoints, and in general you can add any extra data you want, as long as there is some bounding box to localize where the annotation roughly is. At one point I did have line annotations for scallop radii, but I dropped them in favor of converting them to a polygon. The fluke use-case is a good example of a different sort of geometry that really should be supported and doesn't fit in cleanly with the existing keypoint / box / polygon paradigms.

👍 Sara Beery
Mike C (mike@mikecee.solutions)
2021-07-10 10:57:30

Notion’s ‘Sessions’ will be available free to all beta users who join and complete the checklist below once Sessions launches on July 12, 2021 🙂

https://www.notion.so/Sessions-is-forever-free-for-its-active-beta-users-a7f6502c63eb40009add29cf14b03d09

Sessions Community on Notion
Eric Greenlee (Eric.greenlee.96@gmail.com)
2021-07-12 09:42:18

Hi everyone! Thanks to @Lily Xu for adding me to this group. I'm an EE looking to work on problems in conservation and am hoping for recommendations for grad programs (or companies). I've worked as an RF engineer for the past 3 years designing a variety of wireless systems and ideally want to continue doing this kind of work, but I'm open to other hardware projects. I've had trouble finding PhD programs that apply engineering to problems in conservation (rather than doing longer-term EE research) and was hoping you software people might know of cool hardware things in the works. Also open to jobs in private industry. Thanks in advance!

👋 Lily Xu, Sara Beery, Elizabeth Bondi, Jason Holmberg (Wild Me), Mike C
Lily Xu (lily_xu@g.harvard.edu)
2021-07-12 09:43:08

*Thread Reply:* Welcome Eric!!

❤️ Eric Greenlee
Sara Beery (sbeery@caltech.edu)
2021-07-12 09:51:47

*Thread Reply:* Welcome!! One way you can do this, and I think what a lot of folks here have done, is find a PhD advisor who will support this kind of interdisciplinary work within a traditional EE/CS type program

Sara Beery (sbeery@caltech.edu)
2021-07-12 10:02:36

*Thread Reply:* For example, @Elijah Cole (Deactivated), @gvanhorn and I all did/are doing our PhD's with @Pietro Perona (who is the best, thanks Pietro!!) who is supportive of our work on different aspects of real-world problems in the natural world and other social good domains like medicine. Milind Tambe advises @Lily Xu and @Elizabeth Bondi and has a similar passion for work that has positive societal impact. If you're more on the hardware side one person you could reach out to to get a better sense of the lay of the land might be Andrew Schultz, who is a PhD student in MechE at Georgia Tech and does awesome conservation-focused research and education (https://streamerlinks.com/StreamingScience)

❤️ Lily Xu, Elizabeth Bondi, aruna, Eric Greenlee
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2021-07-12 18:21:55

*Thread Reply:* Hi Eric! I am just starting the transition from working as a software engineer to becoming a PhD student (with Pietro 🎉 !). The process of finding programs and debating PhD vs. private industry work is very fresh in my mind. While not entirely up to date, this site has a number of researchers in this intersection. Alternatively, in the private industry, the Work On Climate slack channel is a good place to lurk for companies trying to solve climate issues (not conservation-focused, but there could be some overlap in interests).

workonclimate.org
🙌 Sara Beery, Lily Xu
❤️ Eric Greenlee
Eric Greenlee (Eric.greenlee.96@gmail.com)
2021-07-13 09:55:52

*Thread Reply:* Thank you so much @Sara Beery and @Suzanne Stathatos! All your resources are fantastic, and CompSustNet in particular looks promising!

❤️ Sara Beery, Suzanne Stathatos, Lily Xu
Mike C (mike@mikecee.solutions)
2021-07-18 05:28:51

*Thread Reply:* Welcome @Eric Greenlee - I believe @Ben Seleb is doing a graduate project in conservation hardware. Worth giving him a shout.

Also, Shah Selbe is into field IOT hardware. Maybe his work can give you some leads

https://medium.com/conservify/bringing-conservation-technology-to-life-eadd93bfa3af

https://medium.com/conservify

Medium
Reading time
7 min read
Medium
❤️ Eric Greenlee, Sara Beery
😊 Lily Xu
Eric Greenlee (Eric.greenlee.96@gmail.com)
2021-07-18 08:29:51

*Thread Reply:* Thanks @Mike C! I reached out to Ben, and I actually contacted Shah 6 months ago about FieldKit

Stefan Schneider (sschne01@uoguelph.ca)
2021-07-12 12:16:10

Hi Everyone! Would anyone happen to know of a public/available insect/arthropod camera trap dataset? Something with multiple instances of species/individuals on a semi-standardized background. Akin to something like this:

Sara Beery (sbeery@caltech.edu)
2021-07-12 12:17:36

*Thread Reply:* I don't know of a public one, but I was JUST talking to someone who is interested in this on Friday for Ag applications (detecting invasives)

Stefan Schneider (sschne01@uoguelph.ca)
2021-07-12 12:18:57

*Thread Reply:* yeah! That's pretty much the exact same idea. If something comes through the grapevine I'd love to hear about it

👍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-07-12 12:26:18

*Thread Reply:* @Chandra Krintz and @rich wolski this is similar to what we were talking about Friday, right? Do you guys know of any public datasets?

Ben Weinstein (benweinstein2010@gmail.com)
2021-07-12 12:28:34

*Thread Reply:* let me know too, I have collaborators in switzerland working on microscope images.

Ben Weinstein (benweinstein2010@gmail.com)
2021-07-12 12:28:51

*Thread Reply:* I don't have anything annotated and ready to share yet, but I'll update (probably in about a year)

Ben Weinstein (benweinstein2010@gmail.com)
2021-07-12 12:29:43

*Thread Reply:* you should email https://www.luerig.net/

luerig.net
👍 Sara Beery, Stefan Schneider
Stefan Schneider (sschne01@uoguelph.ca)
2021-07-12 13:40:24

*Thread Reply:* I'll do that! Thanks 🙂 The dataset doesn't actually have to be labeled

rich wolski (rich@cs.ucsb.edu)
2021-07-12 14:17:33

*Thread Reply:* @Sara Beery @Stefan Schneider @Ben Weinstein Hi All. Sadly, while we know of a research team that is working on insect identification I don't think they have made an image data set public. We will check to see if they have an interest in doing so.

Jonathan Crall (erotemic@gmail.com)
2021-07-13 10:33:41

I know of this honeybee tracking dataset: https://zenodo.org/record/4400651#.YO2kE3VKi54 which corresponds to the paper/code described on this github page: https://github.com/vladan-stojnic/Detection-of-Small-Flying-Objects-in-UAV-Videos

🐝 Stefan Schneider, Jonathan Crall
Stefan Schneider (sschne01@uoguelph.ca)
2021-07-13 10:35:46

*Thread Reply:* thanks for pointing this one out!

Chris Yeh (chrisyeh96@gmail.com)
2021-07-21 01:52:40

Came across this postdoc opportunity which might be interesting for certain members in this Slack: https://earth.stanford.edu/job/two-postdoctoral-positions-stanford-university-remote-sensing-and-georeferencing-global-methane

earth.stanford.edu
❤️ Sara Beery, Lily Xu, Siyu Yang
🌲 Suzanne Stathatos
Ayan Mukhopadhyay (ayanmukg@gmail.com)
2021-07-24 06:09:50

Hey everyone! Happy to be a part of this community. I wanted to share a project we are doing on conserving monarch butterflies. These tiny creatures travel over 3,000 miles every year to their overwintering sites in Mexico (and some head to California). As monarchs stay in extremely dense clusters, it is very difficult to count them. Currently, people simply estimate the size of the population every year by looking at the total hectares occupied. Unfortunately, this can be extremely error-prone. We collected images of monarchs and showed how simple crowd-counting techniques using deep nets can be used to accurately estimate counts of butterflies in dense clusters. Read the paper here: https://www.biorxiv.org/content/10.1101/2021.07.23.453502v1 Our dataset is open for anyone to use, so if you are interested, please reach out. Also, if you know anyone in CA or Mexico who might be interested in trying this out in the field, please let us know and we will be happy to help!

bioRxiv
❤️ Sara Beery, Emilio Luz-Ricca, Armin Bazarjani, Lily Xu, Barry Brook, David Russell
👍 Dan Morris, Mikey Tabak
Barry Brook (barry.brook@utas.edu.au)
2021-07-27 07:38:41

@Sara Beery Your new paper on SDMs for ML practitioners (https://arxiv.org/pdf/2107.10400.pdf) is excellent!

❤️ Sara Beery, Kevin Winner, Vienna Saccomanno, Elijah Cole (Deactivated), Cody Kupferschmidt, Arjun Subramonian (they/them), Jason Holmberg (Wild Me)
👏 gvanhorn, Nico Lang, Riccardo de Lutio, Talia Speaker, Ayan Mukhopadhyay, Beckett Sterner, Declan, Emilio Luz-Ricca, Mitch Fennell, Armin Bazarjani, Lily Xu, Mikey Tabak
Sara Beery (sbeery@caltech.edu)
2021-07-27 07:44:02

*Thread Reply:* Thank you!!! This was a joint effort with @Elijah Cole (Deactivated) and was wonderfully advised by @Kevin Winner, I learned a TON while writing it 🙂

❤️ Barry Brook, Kevin Winner
David Rolnick (dsrolnick@gmail.com)
2021-07-28 21:43:03

Do you know about an exciting project involving AI & climate change? Climate Change AI and the Centre for AI & Climate are soliciting case studies for a report by the Global Partnership on AI (GPAI) to be launched alongside COP26. We are most interested in projects that have either been deployed or are aimed towards deployment.

Examples of what case studies might look like:

  • A company using AI to increase the energy efficiency of factories.
  • An NGO tracking deforestation using AI.
  • A research team incorporating AI in climate modeling.
  • A governmental agency using AI for disaster response.

 Suggest projects by August 12 at: https://www.climatechange.ai/gpai-case-studies

👏 Sara Beery
👍 Carly Batist, Bistra Dilkina
Barry Brook (barry.brook@utas.edu.au)
2021-07-31 21:56:17

@Siyu Yang I enjoyed your recent presentation and Q&A on the MegaDetector! Very useful. https://youtu.be/LUkQVARAVFI

YouTube
WILDLABS.NET (https://www.youtube.com/channel/UCrxw8iiyFalKHFNAhZYCAYA)
❤️ Sara Beery, Olivier Gimenez, Jason Holmberg (Wild Me), Talia Speaker
Siyu Yang (yasiyu@microsoft.com)
2021-08-02 16:14:03

*Thread Reply:* Thanks! Glad people find it helpful

Petar Gyurov (pgyurov93@gmail.com)
2021-08-02 12:23:44

Hi guys. Does anyone know where I can find any datasets of polar bear images? Cheers 🐻‍❄️

Alasdair Davies (alasdair@shuttleworthfoundation.org)
2021-08-03 06:49:51

*Thread Reply:* Only thermal from this project (if useful)

Yuval Boss (yuval@yuvalboss.com)
2021-08-09 15:06:11

*Thread Reply:* http://lila.science/datasets/arcticseals and http://lila.science/datasets/noaa-arctic-seals-2019/. I'm not certain both of these have polar bears; the first one should, but unfortunately not many, and you would need to download a lot more data to access them. Send me a message and I can package up some polar bear data separately. Also, these are aerial images, so no close-ups of polar bears. Would love to hear what you are working on if you want to share! 🙂

LILA BC
Petar Gyurov (pgyurov93@gmail.com)
2021-08-09 16:40:14

*Thread Reply:* @Yuval Boss Ah, sounds great but unfortunately aerial photos won't do. I am working on face detection and recognition so need some close up shots! Similar work to the BearID project. I did find this dataset which contained ~900 images of polar bears so that's a good starting point 🙂

cvml.ist.ac.at
Yuval Boss (yuval@yuvalboss.com)
2021-08-09 17:04:34

*Thread Reply:* Awesome! If you come across any aerial polar bear data don't hesitate to share 🙂 we only have a few hundred examples and could really benefit from more

👍 Petar Gyurov
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2021-08-02 12:58:07

Professor Jane Waterman at the University of Manitoba did some whiskerprint photo ID work for polar bears. May have good imagery.

👍 Sara Beery, Petar Gyurov, Jon Van Oast
Petar Gyurov (pgyurov93@gmail.com)
2021-08-02 14:24:09

*Thread Reply:* Thank you, I will get in touch!

Ben Weinstein (benweinstein2010@gmail.com)
2021-08-02 15:29:21

Dominique Chabot wrote a paper in the last few months, which I declined to review, on scraping web images for polar bears; you can ask him: dominique.chabot@mail.mcgill.ca

👍 Petar Gyurov, Sara Beery
Remi Gonety (gonetyremi@outlook.com)
2021-08-05 14:33:30

Hey everyone, I am new here 👋. Thanks to @Dan Morris for inviting me. I will be starting a master's in data science at the University of Edinburgh. I have a background in environmental engineering, and I am interested in water and conservation. I am excited to contribute and learn from you all.

👍 Oisin Mac Aodha, Benjamin Kellenberger, Jason Holmberg (Wild Me), Sara Beery
👋 Jon Van Oast, Daniel Grzenda, Declan, Jason Holmberg (Wild Me), Sara Beery, Stefan Schneider, Ed Miller, Lily Xu
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2021-08-05 15:16:59

*Thread Reply:* Awesome! Welcome to absolutely beautiful old Dunedin! 😄

😁 Remi Gonety
Remi Gonety (gonetyremi@outlook.com)
2021-08-05 19:01:55

*Thread Reply:* thanks

HM van Zyl (senseivzyl@gmail.com)
2021-08-09 13:45:58

Hey all! Thanks @Sara Beery for the invite.

Quick Intro,

Henk van Zyl, from Cape Town, South Africa. I’m the Cofounder of Symbyte, where we help companies deploy useful AI. Cool things we’ve worked on : Active Shooter Detection and Triangulation, and saving youth from a life of Gangsterism. Our bread and butter is mostly MLops and cloud and data engineering.

I'm super interested in this field and would love to volunteer some time if anyone ever needs a grunt to help.

Cheers

👍 Jason Holmberg (Wild Me), Omiros Pantazis, Sara Beery, Lily Xu
Emmanuel Dufourq (edufourq@gmail.com)
2021-08-19 12:24:30

*Thread Reply:* hey @HM van Zyl - also from CT here! Are you interested in collaborations with academics? I've recently joined stellenbosch uni, let me know if you'd be keen and there should be a new cohort of MSc students in engineering looking for projects soon

😍 Sara Beery
HM van Zyl (senseivzyl@gmail.com)
2021-08-19 15:25:28

*Thread Reply:* Hey Emmanuel! I'd love to do a quick meet. I'll pm you my email and number then we can get started! Perhaps someone in the channel could give us an idea of something cheap and dirty we could attempt. I think the Stellenbosch campus is ripe for some bird monitoring! I'll take this over to random.

Emmanuel Dufourq (edufourq@gmail.com)
2021-08-20 03:13:07

*Thread Reply:* I have tons of ideas, literally a whole list of potential MSc topics! 🙂 So yes, I think collaborating with the zoological department at SU and the data science school which I work in would be a great 3-way collaboration. I have a bunch of audiomoths and field recorders. So yes, please do PM me your details

❤️ Sara Beery
Daniel Davila (daniel.davila@kitware.com)
2021-08-09 13:52:54

Hello, I just got back from an excellent talk given by @Sara Beery on biodiversity model robustification (totally a word). I'm Dan, I work at Kitware, which is an open-source scientific computing company with a huge focus on AI/ML systems. I have a background in deployed multiplatform, multisensor CV systems for environmental monitoring. It looks like Jon Crall has been contributing here for a bit; he's a brilliant colleague of mine here at Kitware. Also looks like some of our ecological work has been featured, thanks to @Yuval Boss; we are heavily involved in the NOAA seal detection work among other things. Nice to meet y'all!

👋 Sara Beery, Jason Holmberg (Wild Me), HM van Zyl, Omiros Pantazis, Jon Van Oast, Jan Kees, Ed Miller, Talia Speaker, Lily Xu
Frederic Fol Leymarie (ffl@dynaikon.com)
2021-08-10 06:14:14

Hi; I too joined after the excellent interactive presentation by @Sara Beery yesterday. I work with DynAIkon which focuses on AI solutions for the camera trap world. We are part of a large EU consortium which covers different facets of conservation (focused on observations of plants and animals) and citizen science : https://cos4cloud-eosc.eu/

Cos4cloud
👋 Kai Waddington, Sara Beery, Ross Gardiner, Talia Speaker, Lily Xu
Emily Charry Tissier (hello@whaleseeker.com)
2021-08-12 14:59:58

Hello! Our group (Whale Seeker) detects marine mammals from imagery using AI. We are trying to create educative and thoughtful content about AI so non-experts can better understand the how and why. We frequently come across the attitude from possible clients that “AI is too complicated and expensive so we’ll substitute crowdsourcing/citizen science instead.” We wrote a blog on this topic that we’d love your thoughts on if you have time. https://www.whaleseeker.com/blog/widening-the-bottleneck-can-citizen-science-accelerate-conservation. Thank you in advance!

Whale seeker
❤️ Sara Beery, aruna, Carly Batist, Malcolm Kennedy, Rebekah Loving
😎 Jon Van Oast, Mitch Fennell, Omiros Pantazis
👏 Ștefan Istrate, Lily Xu
🐳 Cameron Trotter, Mike C
👀 Cameron Trotter
Mike C (mike@mikecee.solutions)
2021-08-13 12:33:52

*Thread Reply:* Love your blog posts. I would love to share a summary version linking to the full article (backlinked and attributed to you) on the Open Ocean Camera Medium publication - can we get permission to? 🙂 @Emily Charry Tissier

👍 Emily Charry Tissier, Malcolm Kennedy
Emily Charry Tissier (hello@whaleseeker.com)
2021-08-13 12:39:31

*Thread Reply:* Absolutely!

👍 Malcolm Kennedy
Mike C (mike@mikecee.solutions)
2021-08-13 12:34:48

Stumbled upon the ARM AI tools catalogue while browsing ConservationX. Might be a resource for those looking for tools, and an advertising channel for those looking to spread word about their solutions?

https://www.arm.com/why-arm/partner-ecosystem/ai-ecosystem-catalog/conservation-x-labs

Arm | The Architecture for the Digital World
👍 Ed Miller
Ed Miller (ed@hypraptive.com)
2021-08-13 20:48:36

*Thread Reply:* I work for Arm. Feel free to message me if you have any questions!

❤️ Jason Holmberg (Wild Me)
Caleb Robinson (calebrob6@gmail.com)
2021-08-14 22:44:27

Awesome to see the work on tree crown detection from @Rebekah Loving @Arushi Agarwal at KDD's Fragile Earth workshop! ("A network fusion model pipeline for multi-modal, deep learning for tree crown detection" - https://ai4good.org/fragile-earth-2021/)

🎉 Jon Van Oast, Elijah Cole (Deactivated), Ben Weinstein, Justin Kay, Aaron Ferber, Sara Beery, Lily Xu, Björn Lütjens
😍 Sara Beery
Marc Grimson (mg2425@cornell.edu)
2021-08-15 17:23:56

Hi everyone! I'm relatively new here to the Slack and recently started my PhD in CS and I'm really excited to work on problems in Sustainability and Conservation. I'm coming from a number of years as a software engineer where unfortunately there wasn't much ML. I'm hoping to improve on my ML knowledge, and was wondering if anyone knew of any good intermediate level online resources for ML, or if there were any interesting datasets in Conservation that would be relatively easy to play around with and develop models on for practice. Thanks in advance!

👋 Sara Beery, Nico Franz, Chris Yeh, Lily Xu, Björn Lütjens
Sara Beery (sbeery@caltech.edu)
2021-08-16 10:40:39

*Thread Reply:* There are a lot of accessible datasets/challenges on kaggle via the FGVC workshop competitions! https://sites.google.com/view/fgvc8/competitions

sites.google.com
George Ore (gore@caltech.edu)
2021-08-17 01:30:19

Hello everyone, I am an incoming college freshman assigned to check out your cool work. I hope I can learn a lot from everyone and get some essential skills under my belt.

👋 Declan, Sara Beery, Suzanne Stathatos, Alex Borowicz, Carly Batist, Emmanuel Dufourq, Lily Xu
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-08-20 13:22:50

Having wished for a long time that there was a central database-type thing for #tech4wildlife, @Gracie Ermi & I created it ourselves! Here’s a directory of conservation tech org’s, companies, collabs, projects, etc. so the community has a centralized place to search for resources --> https://sites.google.com/view/conservation-tech-directory.

We’ve posted more info on Twitter if you want to check that out. There’s a link there to a form where you can suggest a resource if you know of something that belongs here but currently isn’t! Hope it can be a useful resource for you all. If you have any questions or suggestions, do feel free to reach out to us 🙂

sites.google.com
😎 Jon Van Oast, Thijs, Howard L Frederick, HM van Zyl, Ted Schmitt, Jason Holmberg (Wild Me), Ritwik, Björn Lütjens
🎉 Gracie Ermi, aruna, Emilio Luz-Ricca, Lily Xu, HM van Zyl, Jason Holmberg (Wild Me), Talia Speaker, Tarun, Rebekah Loving, Björn Lütjens
❤️ Justin Kay, Howard L Frederick, HM van Zyl, Jason Holmberg (Wild Me), Ankita Shukla, Suzanne Stathatos, Eric Greenlee, Yuval Boss
Monty Ammar (montyx23@gmail.com)
2021-12-21 17:34:07

*Thread Reply:* Hey Carly this sounds brilliant! The link does not work for me for some reason though. Wondering if the link is deprecated or whether there is a new one at all? Many thanks

Gracie Ermi (gracieermiifthen@gmail.com)
2021-12-21 17:47:38

*Thread Reply:* Hi @Monty Ammar! The directory now lives at https://conservationtech.directory

conservationtech.directory
👍 Carly Batist
Monty Ammar (montyx23@gmail.com)
2021-12-21 18:45:10

*Thread Reply:* Hey, thank you very much! ☺️

❤️ Carly Batist
Greg Lipstein (greg@drivendata.org)
2021-09-09 15:33:02

Hi all! We at DrivenData just launched a new machine learning competition for wildlife depth estimation in camera trap videos. It features a great dataset of labeled #camera_traps videos from MPI-EVA and Wild Chimpanzee Foundation researchers.

We'd love for you to participate and help spread the word with anyone you think might be interested! Thanks!!

DrivenData
🎉 Sara Beery, Oisin Mac Aodha, Ixchel Meza
👍 Talia Speaker, Ștefan Istrate
😎 Jon Van Oast, Suhail Alnahari
🐒 Stefan Schneider, Carly Batist
:thumbsup_all: Frederic Fol Leymarie
Sara Beery (sbeery@caltech.edu)
2021-09-09 15:33:39

*Thread Reply:* Awesome!! @Oisin Mac Aodha 🙂

Sara Beery (sbeery@caltech.edu)
2021-09-09 15:37:00

*Thread Reply:* @Greg Lipstein how was the ground truth collected?

Sara Beery (sbeery@caltech.edu)
2021-09-09 15:38:52

*Thread Reply:* Oh nevermind, I found it 🙂 https://www.drivendata.org/competitions/82/competition-wildlife-video-depth-estimation/page/392/#about-the-data

DrivenData
👍 Greg Lipstein
Greg Lipstein (greg@drivendata.org)
2021-09-09 15:38:58

*Thread Reply:* Thanks @Sara Beery! Check out About the Data section here https://www.drivendata.org/competitions/82/competition-wildlife-video-depth-estimation/page/392/

DrivenData
Greg Lipstein (greg@drivendata.org)
2021-09-09 15:39:03

*Thread Reply:* jinx

😁 Sara Beery, Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2021-09-09 15:39:13

*Thread Reply:* So, along those lines, any sense of what the accuracy in this manual labeling is?

Sara Beery (sbeery@caltech.edu)
2021-09-09 15:39:54

*Thread Reply:* @Elijah Cole (Deactivated) and I captured a similar thing for some of our cameras, but I got the sense that there was a lot of room for human error/subjectivity to creep in.

Greg Lipstein (greg@drivendata.org)
2021-09-09 15:50:10

*Thread Reply:* That would be a good question for the MPI-EVA team. From what we've heard, that's right; it's often a time-consuming, manual, and error-prone process to attach these distance labels. That said, some noise in labels is to be expected, and the difficulty of labeling provides some of the value of having ML solutions that can help. The scale of the data also helps here, especially to the extent the noise is random

Sara Beery (sbeery@caltech.edu)
2021-09-09 15:54:27

*Thread Reply:* Absolutely! I'm curious if there's an upper bound on performance based on label error, and/or if the error is biased for a given labeler (ie maybe I'm likely to always label closer than reality, for example)

Sara Beery (sbeery@caltech.edu)
2021-09-09 15:56:02

*Thread Reply:* Another interesting thing to look at would be agreement between labelers, did multiple experts label each image?

Sara Beery (sbeery@caltech.edu)
2021-09-09 15:56:26

*Thread Reply:* (I realize these are not necessarily things you have answers to, just thinking out loud 🙂 )

Oisin Mac Aodha (macaodha@caltech.edu)
2021-09-09 15:57:36

*Thread Reply:* Looks cool.

Greg Lipstein (greg@drivendata.org)
2021-09-09 16:02:51

*Thread Reply:* > I realize these are not necessarily things you have answers to, just thinking out loud 🙂 Totally! Those are all good thoughts

Oisin Mac Aodha (macaodha@caltech.edu)
2021-09-09 16:57:38

*Thread Reply:* @Greg Lipstein I see that you tried our monodepth2 code in the example image - very cool. Have you tried MiDaS? It is trained on much more diverse data: https://github.com/isl-org/MiDaS

❤️ Sara Beery
Greg Lipstein (greg@drivendata.org)
2021-09-09 17:20:13

*Thread Reply:* We did! I haven't seen Midas but I'm also not the right one to ask 🙂 . Much better to check with our (fantastic) data scientist @Emily Dorne

Emily Dorne (emily@drivendata.org)
2021-09-09 17:33:32

*Thread Reply:* @Oisin Mac Aodha I have seen the paper but haven't yet tried out the code! Very cool to see transformers being used in this space

👍 Oisin Mac Aodha
Ștefan Istrate (stefan.istrate@gmail.com)
2021-10-11 08:36:34

*Thread Reply:* Hi @Greg Lipstein! What does the ground truth distance mean when multiple animals are present in the image? I couldn't find any reference in the resources listed on DrivenData.

Katie Wetstone (she, her) (katie@drivendata.org)
2021-10-11 12:26:38

*Thread Reply:* @Ștefan Istrate Wherever possible, frames with multiple animals present have been excluded from the labels that participants have to predict. However, the hand-labeled ground truth is not 100% comprehensive and there are some unlabeled animals. We used the megadetector model to filter out any frames with multiple high-probability bounding boxes, but the predictions are not perfect and a small number may have slipped through - looks like you found a good example of this. The estimated bounding box given is the one with the highest probability, but may not correspond exactly to the label. These cases should be very rare in the data!
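A hedged reconstruction of that filtering step: keep only frames with exactly one high-confidence box. The detection format here (a list of (confidence, box) pairs per frame) and the threshold are assumptions, not MegaDetector's actual output schema.

```python
# Keep frames whose detector output contains exactly one box above the
# confidence threshold; frames with zero or multiple strong boxes are
# dropped, mirroring the "multiple animals excluded" filtering above.
def single_animal_frames(frames, threshold=0.8):
    keep = {}
    for frame_id, dets in frames.items():
        strong = [d for d in dets if d[0] >= threshold]
        if len(strong) == 1:
            keep[frame_id] = strong[0]
    return keep

frames = {
    "f1": [(0.95, [0, 0, 50, 50])],                           # one animal -> keep
    "f2": [(0.9, [0, 0, 50, 50]), (0.85, [60, 0, 40, 40])],   # two -> drop
    "f3": [(0.3, [5, 5, 10, 10])],                            # nothing confident -> drop
}
print(sorted(single_animal_frames(frames)))  # ['f1']
```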

👍 Sara Beery
Ștefan Istrate (stefan.istrate@gmail.com)
2021-10-11 12:29:39

*Thread Reply:* Makes sense, thanks!

Katie Wetstone (she, her) (katie@drivendata.org)
2021-10-11 12:30:10

*Thread Reply:* happy to help!

Sara Beery (sbeery@caltech.edu)
2021-09-10 08:03:18

IJCV Special Issue focusing on Animal Tracking and Modeling.

https://www.springer.com/journal/11263/updates/19611514?gclid=Cj0KCQjw4eaJBhDMARIsANhrQADTS1NAfl1ZXtPL12noP3x7tzZTV6Ognaw7AtBKmmogofes61waAvY6EALw_wcB

Springer
👍 Benjamin Kellenberger, Emily Charry Tissier, Chris Yeh, Jason Holmberg (Wild Me)
😎 Jon Van Oast
Petar Gyurov (pgyurov93@gmail.com)
2021-09-13 06:42:28

Anyone know of any libraries/software that can help with identifying duplicate images? I want to make sure my training and test sets don't share any images. A quick search returned imgdiff, but I'm curious if there's anything purpose-built. Cheers

Daniel Davila (daniel.davila@kitware.com)
2021-09-13 10:48:06

*Thread Reply:* My group is working on this problem a bit, would be happy to share if interested. There are a few interesting papers in this area as well:

Semantic Redundancies in Image-Classification Datasets: The 10% You Don’t Need - https://arxiv.org/pdf/1901.11409.pdf
A critical look at the current train/test split in machine learning - https://arxiv.org/pdf/2106.04525.pdf
Learning From Less Data: A Unified Data Subset Selection and Active Learning Framework for Computer Vision - https://arxiv.org/pdf/1901.01151.pdf
Do we train on test data? Purging CIFAR of near-duplicates - https://arxiv.org/pdf/1902.00423.pdf
Apple Neural Hash (CSAM) Report - https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf

Also a few commercial options:

https://www.lightly.ai/post/how-redundant-is-your-dataset

👍 Justin Kay, Petar Gyurov
Petar Gyurov (pgyurov93@gmail.com)
2021-09-13 11:18:15

*Thread Reply:* Thanks! Lots of good stuff in there. I thought about using Apple's Neural Hash for this... I wonder how effective it would be. Do share your work! I am interested to learn more about this.

aruna (arunas@mit.edu)
2021-09-13 11:31:15

*Thread Reply:* I have used the perceptual hash for this, with decent though not 100% perfect results: https://github.com/knjcode/imgdupes

👍 Petar Gyurov
Petar Gyurov (pgyurov93@gmail.com)
2021-09-13 12:42:32

*Thread Reply:* That looks great, thanks! Will give it a go.

Daniel Davila (daniel.davila@kitware.com)
2021-09-13 12:56:33

*Thread Reply:* When you say "share" images between train/test, what exactly do you mean? It's a really interesting question because there is a big difference between finding exact duplicates, near duplicates (smooth augmentations such as color shifts, translations, crops, rotations...), and finding similar images which are not reachable at all in augmentation space but are so semantically similar that you would never want to both train/test on them.

Petar Gyurov (pgyurov93@gmail.com)
2021-09-13 13:06:49

*Thread Reply:* I was initially only considering exact duplicates but now you've opened my eyes! I'd love to get a measure of near duplicates and similar images but I wouldn't consider it a priority for what I am doing just yet.

Caleb Powell (cpowel21@asu.edu)
2021-09-13 14:51:38

*Thread Reply:* My first instinct would be to hash them. I've never used it, but this might be worth checking out: https://github.com/JohannesBuchner/imagehash
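Libraries like imagehash implement DCT-based perceptual hashes properly; as a rough illustration of the underlying idea only (a toy average hash over a plain grayscale pixel grid, not imagehash's actual implementation), hashing and comparing by Hamming distance looks like:

```python
def average_hash(pixels, hash_size=8):
    """Toy average hash: downscale a grayscale grid to hash_size x hash_size
    by block averaging, then set a bit per cell that exceeds the overall mean."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // hash_size, w // hash_size
    cells = []
    for i in range(hash_size):
        for j in range(hash_size):
            block = [pixels[i * bh + di][j * bw + dj]
                     for di in range(bh) for dj in range(bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_near_duplicate(p1, p2, max_distance=5):
    return hamming(average_hash(p1), average_hash(p2)) <= max_distance
```

Exact duplicates hash identically; near duplicates (small crops, brightness shifts) usually land within a few bits, while semantically similar but distinct images typically do not, which is why hashing alone won't catch that harder case Daniel raised.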

👍 Petar Gyurov, Yuval Boss
Suhail Alnahari (alnah005@umn.edu)
2021-09-14 13:56:13

*Thread Reply:* If you're using image hashes, specifically apple's neural hash, be aware of this https://blog.roboflow.com/neuralhash-collision/

😅 Caleb Powell
David Rolnick (dsrolnick@gmail.com)
2021-09-15 20:42:49

Focus issue from AIHub on "Life on Land": https://aihub.org/2021/09/10/focus-on-life-on-land-call-for-contributions/

👍 Sara Beery, Jason Holmberg (Wild Me)
😎 Jon Van Oast, Jason Holmberg (Wild Me)
Sara Beery (sbeery@caltech.edu)
2021-09-17 10:21:25

We are thrilled to announce the call for applicants for the first annual Resnick Sustainability Institute Summer School on Computer Vision Methods for Ecology (https://cv4ecology.caltech.edu/ ). This intensive, three-week program will teach applied computer vision methods to senior ecology graduate students and postdocs, and will be hosted at Caltech from August 1-20, 2022. Students will develop hands-on computer vision systems to help answer their own ecological research questions, using their own data. They will receive daily mentorship from a passionate team of computer vision experts with a track record of impact in conservation and sustainability. Each student will be provided with $2500 in cloud credits to facilitate their project development, sponsored by Microsoft AI for Earth and Amazon AWS. Our team of instructors will work with applicants leading up to the intensive to curate computer-vision-ready labels for their data that will be used to prototype systems for their research questions during the class. Students will leave the course empowered to build their own computer vision models for ecological applications, and gain skills in problem formulation, dataset curation, model training, model evaluation, and hosting models for inference. Acceptance to the 2022 school will be competitive; we plan to accept only 20 students, who will be evaluated on their past work as well as their proposed projects. Applications are due December 1st, and we strongly encourage applicants from minoritized groups in academia.

https://twitter.com/cv4ecology/status/1438867424078557187

👍 Benjamin Kellenberger, Jason Holmberg (Wild Me), Riccardo de Lutio, Carly Batist, Lily Xu, Nico Franz, Ted Schmitt, Stefan Schneider, Casey Youngflesh, Ayan Mukhopadhyay, Chris Yeh, David, Björn Lütjens
🎉 Suzanne Stathatos, Lily Xu, Oisin Mac Aodha, Nico Franz, Omiros Pantazis, Gracie Ermi, Carly Batist, Stefan Schneider, Mitch Fennell, Olivier Gimenez, David, Talia Speaker
❤️ Jon Van Oast, Ben Weinstein, Justin Kay, Stefan Schneider, Hannah Yin, Mitch Fennell
Sara Beery (sbeery@caltech.edu)
2021-09-17 10:21:50

*Thread Reply:* Please share widely!!

👍 Jon Van Oast
Lily Xu (lily_xu@g.harvard.edu)
2021-09-17 11:03:12

*Thread Reply:* this is a wonderful initiative, Sara! so glad that you're doing this!

❤️ Sara Beery
Gracie Ermi (gracieermiifthen@gmail.com)
2021-09-17 11:37:23

*Thread Reply:* This is so awesome, Sara! Will definitely spread the word!

❤️ Sara Beery
Jon Van Oast (jon@wildme.org)
2021-09-17 11:53:37

*Thread Reply:* this is really great!

❤️ Sara Beery
Casey Youngflesh (caseyyoungflesh@gmail.com)
2021-09-17 13:48:26

*Thread Reply:* So awesome!!

❤️ Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-09-22 09:47:29

"The annual LifeCLEF workshop will take place tomorrow by videoconference during the CLEF 2021 conference.  As every year, we will present the main results of the challenges organized and we will also have 3 invited speakers who will talk more broadly about the application of AI for citizen science, biodiversity and land management. The detailed programme of the workshop and the links to zoom videoconferences can be found on LifeCLEF website: https://www.imageclef.org/LifeCLEF2021"

👍 Oisin Mac Aodha, Ștefan Istrate, Justin Kay
:thumbsup_all: Frederic Fol Leymarie
😎 Jon Van Oast
Sachith Seneviratne (sachith.seneviratne@unimelb.edu.au)
2021-09-23 10:53:44

Hello everyone, I participated in one of the LifeCLEF challenges and found my way here after Sara's very interesting talk. Greetings from down under!

👋 Stefan Schneider, Benjamin Kellenberger, Caleb Powell, Jon Van Oast, Sara Beery, Ben Weinstein, Elijah Cole (Deactivated), Jason Holmberg (Wild Me)
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-09-24 11:41:18

@Gracie Ermi and I just migrated our Conservation Tech Directory site to a new platform that is more user-friendly and aesthetically pleasing. It has a snazzy link title now too! - conservationtech.directory. Check it out!

This update also comes with ~30 new entries, bringing the total count to 447. As before, use this Google form to submit resources that should be on here but aren’t yet. And a downloadable PDF of the directory can be found on FigShare.

🙌 Jes Lefcourt, Jason Holmberg (Wild Me), Jon Van Oast, Sara Beery, Lily Xu, Talia Speaker, Ștefan Istrate, Suzanne Stathatos, Mitch Fennell, Megan Cromp, Kirk Larsen, Petar Gyurov
🙌:skin_tone_3: Hannah Yin
Gracie Ermi (gracieermiifthen@gmail.com)
2021-09-24 13:29:17

*Thread Reply:* Also, please feel free to use the google form to submit corrections to resources that are already in the list. We want this to be as accurate as possible, so we’re all ears if you have a correction or a piece of info to fill in!

💯 Carly Batist, Jason Holmberg (Wild Me), Ted Schmitt, Lily Xu
Chris Yeh (chrisyeh96@gmail.com)
2021-09-28 16:28:20

Does anyone know any high-quality ML-friendly datasets for any of the following?
• measuring carbon stocks of forests
• counting / identifying solar PV and/or wind farm installations
• mapping CO2, methane, and/or other GHG emissions
• estimating local-level industrial carbon intensity
• detection / classification of deforestation activity
By "ML-friendly," I mean that the dataset must have well-defined train/test splits and evaluation metrics. Basically, I'm looking for examples of good ML datasets for monitoring indicators that are commonly associated with climate change. Many thanks in advance!

👍 Björn Lütjens, Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-09-28 16:28:55

*Thread Reply:* @Björn Lütjens @Ben Weinstein

👍 Björn Lütjens
Ben Weinstein (benweinstein2010@gmail.com)
2021-09-28 16:31:38

*Thread Reply:* I do not know of any ML ready datasets for these metrics.

Ben Weinstein (benweinstein2010@gmail.com)
2021-09-28 16:32:01

*Thread Reply:* I could make recommendations about how one could be created, but I would be surprised if they existed.

Ben Weinstein (benweinstein2010@gmail.com)
2021-09-28 16:33:44

*Thread Reply:* There was a student from geohackweek I taught at UW who was using images of industrial traffic in West Africa to try to estimate carbon intensity and emissions. I could find her name.

Björn Lütjens (bjoern.luetjens@gmail.com)
2021-09-28 16:47:15

*Thread Reply:* Hi Chris! Agreed with Ben - an ML-ready carbon dataset doesn't exist yet, but some people are working on it. This community has assembled a GitHub list of processed and raw forest datasets which may be helpful for you: https://github.com/blutjens/awesome-forests

If you're just looking for nice climate-adjacent ML-ready datasets, the Kaggle datasets in this list would be the perfect fit for you. Lmk if that helps

Daniel Davila (daniel.davila@kitware.com)
2021-09-28 16:49:14

*Thread Reply:* Which scale are we talking about? Handheld cameras or satellites?

Chris Yeh (chrisyeh96@gmail.com)
2021-09-28 16:58:18

*Thread Reply:* @Ben Weinstein: Thanks for confirming my hunch that such datasets are hard to come by.

@Björn Lütjens: Thanks for that GitHub link! A couple of the deforestation tracking ones look relevant.

@Daniel Davila: either!

Daniel Davila (daniel.davila@kitware.com)
2021-09-28 17:04:43

*Thread Reply:* For emissions in particular, I can think of a few leads... all commercial companies though. There are a few groups working on CO2 and methane plume detection. The two leaders I am familiar with (having worked for the former) are SwRI with SLED and FLIR Systems with their OGI line of cameras. They both have extensive methane plume datasets, but whether they would allow you access to them is a different story. There are some air/space-borne flavors of this tech offered by other (mostly proprietary) sources like GHGSat and the EDF's upcoming MethaneSAT mission, though we'll have to see whether they make that data public and annotated.

Chris Yeh (chrisyeh96@gmail.com)
2021-09-28 17:08:00

*Thread Reply:* @Daniel Davila: Yeah, I'm aware that many commercial companies are working in this space. Just to confirm - you aren't aware of any publicly + freely accessible ML datasets for emissions monitoring and/or plume detection, right?

Daniel Davila (daniel.davila@kitware.com)
2021-09-28 17:08:30

*Thread Reply:* No, sorry! Somebody should fund the creation of one

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-09-28 18:24:42

*Thread Reply:* Global Forest Watch tracks deforestation through ML analyses of geospatial data. I’m not sure if the actual training data is public, but you might try reaching out to them? If anything, the folks at WRI (makers of GFW) might have leads on the types of datasets you’re looking for.

Ben Weinstein (benweinstein2010@gmail.com)
2021-09-28 18:48:58

*Thread Reply:* @John Brandt is connected to this project.

Nico Lang (nila@di.ku.dk)
2021-09-30 07:28:48

*Thread Reply:* Hi Chris, 

ESA’s AI4EO initiative organised its first challenge to improve emissions monitoring. It was a super-resolution task. The challenge is already closed, but maybe you can find the dataset somewhere or contact the organisers. https://ai4eo.eu/ai4eo-challenge1 https://platform.ai4eo.eu/air-quality-and-health/data

AIREO is another ESA initiative with the aim of providing “AI-ready” datasets. I think the initiative is still in its early stages. Therefore, the prototype datasets currently available are rather small and not really suitable for training deep models. https://www.aireo.net/

cheers

👍 Chris Yeh
Ben Weinstein (benweinstein2010@gmail.com)
2021-09-30 12:40:00

*Thread Reply:* Just clarifying here: ESA is the European Space Agency, not the Ecological Society of America. Confused me.

👍 Nico Lang
Chris Yeh (chrisyeh96@gmail.com)
2021-10-02 20:53:02

*Thread Reply:* Thanks @Carly Batist - I am aware of Global Forest Watch. However, yeah, I'm concerned that the publicly accessible maps are not the original "ground truth." I may consider reaching out to WRI.

@Nico Lang - Thanks for sharing the AI4EO and AIREO datasets. Will take a look.

Sara Beery (sbeery@caltech.edu)
2021-10-04 17:56:47

Relevant to the above thread: https://www.frontiersin.org/research-topics/26080/forest-carbon-monitoring-and-artificial-intelligence

👍 Nico Lang, David, Jan Kees, Chris Yeh
Stuart Neilon (stuartneilon@gmail.com)
2021-10-13 09:50:23

Hello Everyone,

Stuart Neilon (stuartneilon@gmail.com)
2021-10-13 09:50:58

Does anyone know of a method to track individuals within a matrix of camera traps?

Sara Beery (sbeery@caltech.edu)
2021-10-13 09:53:44

*Thread Reply:* Do you mean re-identify specific animals across images seen at different camera traps in a grid?

Stuart Neilon (stuartneilon@gmail.com)
2021-10-13 10:02:57

*Thread Reply:* Yes. I apologise for the vague question. I am looking for a method of identifying individuals of a particular species that have been previously detected (by MegaDetector), in order to map their movements.

Sara Beery (sbeery@caltech.edu)
2021-10-13 10:04:39

*Thread Reply:* What species? There are methods that work well for certain types of patterned species (see https://www.wildme.org/#/ and https://sites.google.com/corp/view/wacv2020animalreid/home)

Sara Beery (sbeery@caltech.edu)
2021-10-13 10:05:22

*Thread Reply:* @Jason Parham, @Maxime Vidal and @Stefan Schneider are the experts :) The first challenge is curating ground truth in order to robustly evaluate one of these methods on your specific data/population

Stuart Neilon (stuartneilon@gmail.com)
2021-10-13 10:08:46

*Thread Reply:* The family Suidae is of interest, with warthogs being the most important. They do lack patterns, but their facial features (e.g. tusk length) can sometimes be used to identify individuals.

Sara Beery (sbeery@caltech.edu)
2021-10-13 10:10:30

*Thread Reply:* Cool. I'm working with elephants currently, and we're exploring tusk length and shape as a contributing identifier, but it's still a work in progress. Is this something humans are able to do reasonably well for warthogs?

Stuart Neilon (stuartneilon@gmail.com)
2021-10-13 10:19:38

*Thread Reply:* I am no expert in identifying warthogs, but the size of the tusks and overall body size can give a reasonable indication of age, which can help in identifying individuals.

Stuart Neilon (stuartneilon@gmail.com)
2021-10-13 10:39:29

*Thread Reply:* Would it be feasible to train from a checkpoint (EfficientNet-B0 to B7)? If it is feasible, how many images would be required, per individual, to achieve reasonable results?

Stefan Schneider (sschne01@uoguelph.ca)
2021-10-13 11:07:22

*Thread Reply:* Hey Stuart. Here's a paper I wrote that gives a 1000-ft view of the field. There may be one or two techniques that stand out to you from here.

https://besjournals.onlinelibrary.wiley.com/doi/epdf/10.1111/2041-210X.13133

If you're interested in the state of the art, the question is whether the population is closed or open. If you have a closed population where you can get images of every individual, you can use a traditional classifier (Conv Neural Network kind of thing). If the population is open, you'll need something like a similarity comparison network.

https://openaccess.thecvf.com/contentWACVW2020/papers/w2/SchneiderSimilarityLearnin[…]vidualRe-Identification-BeyondtheWACVW2020_paper.pdf

I have a version of this paper catered towards ecologists coming this Winter sometime. So if this one is a little too abstract, stay tuned for that.

👍 Sara Beery, Stuart Neilon, Mitch Fennell
Stuart Neilon (stuartneilon@gmail.com)
2021-10-13 11:31:22

*Thread Reply:* Awesome, thank you for your help @Sara Beery @Stefan Schneider !

Stuart Neilon (stuartneilon@gmail.com)
2021-10-13 12:46:37

*Thread Reply:* @Stefan Schneider Hi Stefan, your paper states that ideally, the dataset for training a re-ID (similarity comparison) system needs >500 individuals. Would a triplet-loss network still be advantageous over a CNN in a smaller population (<100), which therefore contains a higher number of images per individual?

Stefan Schneider (sschne01@uoguelph.ca)
2021-10-13 12:54:00

*Thread Reply:* Just so we're on the same terms, triplet-loss describes the loss used by a similarity comparison network, which is a form of Metric Learning. You could train a similarity comparison network using a number of different losses, triplet and contrastive are the two I use in the paper.

CNN and similarity comparison networks need approximately the same number of images. The advantage of similarity comparison networks is their accuracy will transfer to previously unseen individuals, and you only need ~500 images of a few individuals, whereas a CNN ideally has 500 images for every individual and will erroneously classify previously unseen individuals as one of the individuals it has data for.

500 is a rough ballpark number. You could get away with less if you go heavy on augmentations.

If animal re-ID is what you require, best bet would be to get started. Try training a similarity comparison network using a contrastive loss (it's easier) and just see what accuracies you get. You can go from there to determine if you need additional data, more augmentation, use of the triplet-loss, etc. If you've never trained a machine learning model before, start with a basic CNN and build from there. You'll need the CNN anyways for the similarity comparison network
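For concreteness, the two losses Stefan mentions can be written down in a few lines (a plain-Python sketch over embedding vectors; in practice the embeddings come from a CNN and you'd use a deep learning framework's implementation):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss pulling the anchor toward the positive (same individual)
    and pushing it away from the negative (different individual)."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

def contrastive_loss(a, b, same_individual, margin=1.0):
    """Pairwise version: minimize distance for matching pairs, and push
    non-matching pairs apart until they are at least `margin` apart."""
    d = euclidean(a, b)
    if same_individual:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

The contrastive loss only ever looks at one pair at a time, which is part of why it's easier to train with; the triplet version pulls and pushes simultaneously relative to an anchor.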

🙌 Stuart Neilon
Stuart Neilon (stuartneilon@gmail.com)
2021-10-13 13:02:48

*Thread Reply:* Thank you very much, that is very informative.

Alina Zare (azare@ufl.edu)
2021-10-13 10:24:22

Hi Everyone - I am new to this Slack group and would just like to introduce myself. My name is Alina Zare. I am a Professor in Electrical and Computer Engineering at the University of Florida. I lead the Machine Learning and Sensing lab at UF where we develop new AI/ML approaches with application to agriculture, ecology, and plant science (among other areas) - more here: https://faculty.eng.ufl.edu/machine-learning/. Looking forward to meeting/interacting with y’all.

👋 Sara Beery, Omiros Pantazis, Stuart Neilon, Declan, Lukas Picek, Elijah Cole (Deactivated), Jason Holmberg (Wild Me), Chris Yeh
👍 Lukas Picek, Jason Holmberg (Wild Me)
Scott Hosking (jshosking@gmail.com)
2021-10-13 11:32:10

The Alan Turing Institute are collaborating with CEFAS to run a two-phase Data Study Group (DSG) to explore and build toolkits for rapid identification of plankton using machine learning.

Please feel free to pass on to your networks 😊

https://www.turing.ac.uk/events/data-study-group-november-2021

🙌 Sara Beery, Elijah Cole (Deactivated), Oisin Mac Aodha
😎 Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2021-10-14 13:50:33

NSF call on Biodiversity on a Changing Planet: https://beta.nsf.gov/funding/opportunities/biodiversity-changing-planet-bocp

😎 Jon Van Oast
🤑 Carly Batist, Casey Youngflesh, Amrita Gupta
👍 Vincent Landau, Justin Kay, Bistra Dilkina
Diego Marcos (diego.marcos.gonzalez@gmail.com)
2021-10-15 05:24:40

*Thread Reply:* This call also opened recently (preproposals due on Nov 5th): https://www.biodiversa.org/1772

👍 Sara Beery
Benno Simmons (benno.simmons@gmail.com)
2021-10-18 12:22:58

Hi all! I’m a Lecturer in Ecological Data Science at the University of Exeter in the UK. I’m interested in lots of different things at the AI/ecology/conservation interface, but especially remote sensing of forests, ecological networks and camera traps. I’m putting together a small grant application for a 3 month project on supplementing the training datasets of species identification algorithms for camera traps with data augmentation methods, synthetic imagery and transfer learning. I’m aware of this (https://arxiv.org/abs/1904.05916) awesome paper by @Sara Beery. But are there any other examples? Don’t want to duplicate effort

❤️ Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-10-18 12:25:47

*Thread Reply:* I did a follow up with a student at TU Delft where we used image-to-image translation to improve on the results with the generic synthetic data: https://arxiv.org/abs/2106.12212

👍 Benno Simmons
Ben Weinstein (benweinstein2010@gmail.com)
2021-10-18 12:39:02

*Thread Reply:* Talks with @Sara Beery inspired a tree crown paper that had a similar vibe: using LiDAR to create weak labels for RGB learning. https://www.mdpi.com/2072-4292/11/11/1309

👍 Benno Simmons
Vincent Landau (vincent.landau@gmail.com)
2021-10-18 17:57:32

*Thread Reply:* @Tony Chang

Zhongqi Miao (zhongqi.miao@berkeley.edu)
2021-10-18 18:18:25

Hello everyone! I am a sixth-year PhD student from UC Berkeley. We have just published a paper in Nature Machine Intelligence about deployable wildlife AI recognition systems with efficient humans in the loop and imperfect models! Please check it out if you are interested! This project is moving towards practical deployment in Africa. We hope this work can become an important step toward AI solutions for real-world conservation problems with imperfect AI models. I am very happy to answer any questions you have! Here is the link to the journal webpage (with a preprint link incorporated in case you don't have subscriptions): https://www.nature.com/articles/s42256-021-00393-0 Thank you very much!

👏 Sara Beery, aruna, Suhail Alnahari, Subhransu Maji, Omiros Pantazis, Lily Xu, Talia Speaker, Emilio Luz-Ricca
:thumbsup_all: Frederic Fol Leymarie, Monty Ammar
👋 Carl Boettiger
Matt Weldy (matthewjweldy@gmail.com)
2021-10-19 14:38:40

Hello everyone! I just joined this channel, and I am excited to see and explore some of the resources here. I am a first year PhD student at Oregon State University, in the US. Most of my dissertation work will explore extensions and applications of machine learning tools to bioacoustics data. In particular, I am interested in improving our in-place tools for processing and classifying bird call types with a focus on noisy, overlapping, multilabel data. There are very few people in my department working on applied ML projects, so I am interested in meeting others working on similar problems.

👋 Sara Beery, Lily Xu, Jon Van Oast, Jason Holmberg (Wild Me)
Lily Xu (lily_xu@g.harvard.edu)
2021-10-19 15:01:13

*Thread Reply:* Welcome, Matt! Are you by chance working with Prof. Rebecca Hutchinson? If not, you may be interested in her work at the intersection of ML + ecology 🙂

❤️ Sara Beery
Declan (declan.pizzino@consbio.org)
2021-10-19 15:25:57

*Thread Reply:* Hello from Corvallis! ❤️ OSU

👋 Matt Weldy
Matt Weldy (matthewjweldy@gmail.com)
2021-10-19 15:38:45

*Thread Reply:* I am not working with Rebecca. However, I took a few courses with her during my Master's. She is a great teacher and really knowledgeable about the intersection of ML and ecology. Especially about ML methods applied to distributions and interactions.

Zhongqi Miao (zhongqi.miao@berkeley.edu)
2021-10-19 15:40:33

*Thread Reply:* Welcome, Matt! You might also be interested in Justin Kitzes from UPitt who is doing very similar projects on bird calls and multi-label classifications.

👍 Matt Weldy
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-10-19 17:19:10

*Thread Reply:* ^ And Tessa Rhinehart, who is a PhD student in that lab! Also might be worth talking to Stefan Kahl, who runs BirdNet (from Cornell Lab of O)

Sara Beery (sbeery@caltech.edu)
2021-10-19 20:14:39

*Thread Reply:* And @gvanhorn who just released sound ID in the Merlin app!

👍 Carly Batist
Matt Weldy (matthewjweldy@gmail.com)
2021-10-19 22:16:07

*Thread Reply:* Thanks for the recommendations everyone. I'll reach out to a number of these researchers. Digging through some of the recent literature I get the feeling it might be time for a small symposium to gather people working through similar problems.

❤️ Sara Beery, Yves Bas, Carly Batist
Ross Gardiner (ross.gardiner@dynaikon.com)
2021-10-20 05:39:37

Hi, can anyone recommend a dataset containing humans in a wildlife context (i.e. people with appropriate backgrounds)? The intention is to train an animal detector to distinguish human from animal observations.

Thijs (thijs@q42.nl)
2021-10-20 08:26:41

*Thread Reply:* I've found it's hard to get datasets with humans, because these are usually stripped.

👍 Ross Gardiner
Thijs (thijs@q42.nl)
2021-10-20 08:27:50

*Thread Reply:* The humans that are in there are not representative, because it's usually people deploying or maintaining the cameras, very close up

Thijs (thijs@q42.nl)
2021-10-20 08:28:19

*Thread Reply:* In what context do you want to use the model?

Thijs (thijs@q42.nl)
2021-10-20 08:28:43

*Thread Reply:* I know the megadetector is pretty good at picking out humans.

👍 Sara Beery, Mitch Fennell
Sara Beery (sbeery@caltech.edu)
2021-10-20 09:33:46

*Thread Reply:* The human data is usually stripped before publication for ethical reasons, because the humans in the images have not given consent (and frequently are not aware their photo is being taken). MegaDetector already does a pretty good job of what you're describing, try it out here: https://github.com/microsoft/CameraTraps/blob/master/detection/megadetector_colab.ipynb

👍 Ross Gardiner
Ross Gardiner (ross.gardiner@dynaikon.com)
2021-10-20 09:43:26

*Thread Reply:* Thanks both. I want to train a small SSD model for edge deployment on a video camera trap. We are interested in human rejection from our captured images. Perhaps megadetector could be used to generate my training data... 🤔

👍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2021-10-20 09:46:49

*Thread Reply:* Yeah I use it for weak labeling all the time!
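A sketch of that weak-labeling step (assuming MegaDetector's batch-output JSON, where detection category "2" conventionally means person; not anyone's production code):

```python
# MegaDetector batch output categorizes detections as (by convention)
# "1" = animal, "2" = person, "3" = vehicle.
PERSON_CATEGORY = "2"

def contains_person(image_entry, conf_threshold=0.5):
    """True if any sufficiently confident detection is a person."""
    return any(d["category"] == PERSON_CATEGORY and d["conf"] >= conf_threshold
               for d in image_entry.get("detections", []))

def reject_human_images(md_output):
    """Split MegaDetector results into kept (animal/empty) and rejected (human) files."""
    kept, rejected = [], []
    for img in md_output["images"]:
        (rejected if contains_person(img) else kept).append(img["file"])
    return kept, rejected
```

The kept list could then seed training data for a small SSD, with the usual caveat that MegaDetector misses will propagate into the weak labels.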

👍 Ross Gardiner
Lily Xu (lily_xu@g.harvard.edu)
2021-10-20 11:14:56

*Thread Reply:* @Elizabeth Bondi has a BIRDSAI dataset with exactly this, for thermal imagery! https://sites.google.com/view/elizabethbondi/dataset

👍 Elizabeth Bondi
Elizabeth Bondi (ebondi@g.harvard.edu)
2021-10-20 11:15:55

*Thread Reply:* Thanks @Lily Xu! Please let me know if you have any questions, @Ross Gardiner

Nasibah Azhari (nasibah.azhari@live.com)
2021-10-22 10:32:06

Hi☺️ I’m currently enrolled in an immersive data science program and I’m doing a presentation on spatial data science with a focus on its applications in conservation. I was wondering if anyone had any interesting resources/case studies/tips?

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-10-22 12:01:36

*Thread Reply:* Check out WILDLABS, an online conservation tech community! And the Conservation Tech Directory for examples of resources/companies/organizations working in geospatial applications for the environment. Search ‘geospatial’ or ‘remote sensing’

Nasibah Azhari (nasibah.azhari@live.com)
2021-10-23 09:12:41

*Thread Reply:* Thank you, @Carly Batist ! ☺️

👍 Carly Batist
Carl Boettiger (cboettig@berkeley.edu)
2021-10-22 12:19:08

👋 Hi all, I'm an Assistant Professor in UC Berkeley's Dept of Environmental Science, Policy, and Management where I focus on dealing with uncertainty in conservation models and decision-making. Recently we have been developing Deep RL approaches to sequential decision-making problems in conservation and I'm interested in developing a library of such problems that could serve as an open benchmark (see https://boettiger-lab.github.io/conservation-gym/). My group is also interested in ecological forecasting; folks looking for open problems might consider submitting iterative, probabilistic forecasts to the ongoing NEON forecast challenge: https://projects.ecoforecast.org/neon4cast-docs/. Lastly, our group has an ongoing collaboration with political scientists and ethicists to better understand ethical and political considerations that arise from the application of AI to conservation problems. I'd be thrilled to discuss any of these themes!

👋 Declan, Jason Holmberg (Wild Me), Lily Xu, Ayan Mukhopadhyay, Sara Beery, Benno Simmons, Ritwik, Vincent Landau, Matt Weldy, Chris Yeh, David, Atriya Sen, Casey Youngflesh, Zhongqi Miao, Emilio Luz-Ricca, Carly Batist, Angjoo Kanazawa, Anthony Bao, Monty Ammar
Carly Batist (cbatist@gradcenter.cuny.edu)
2021-10-26 15:43:52

The Conservation Tech Directory has gone past the 500 mark!🎉🎊 Now at 512, to be exact. Thanks to all who have contributed new entries & corrections on our Add/Update a Resource form (keep them coming 🙂). AND my co-developer @Gracie Ermi has given us a logo (see below)!

😍 Sara Beery, Gracie Ermi, Jason Holmberg (Wild Me), Lily Xu, Ankita Shukla, Olivier Gimenez
🎉 Gracie Ermi, Talia Speaker, Declan, Emily Charry Tissier, Jason Holmberg (Wild Me), Lily Xu, Agnethe Seim Olsen, Carl Boettiger
Olga Mierzwa-Sulima (olga@appsilon.com)
2021-10-28 05:29:28

Hi folks 👋 I'm a Data4Good Lead with a data science background at Appsilon, where I run an internal program supporting sustainability initiatives. One of my colleagues @Jędrzej Świeżewski has already been here for some time. We are the team behind the Mbaza project, an AI tool that helps classify animals from camera traps deployed in national parks in Gabon. Next year we will have a strong focus on biodiversity. I'm here to learn about challenges we could help solve with our team's skills (ML vision and data viz) and to network with people from the biodiversity space. I'd especially love to meet researchers whose research would benefit from cooperation with us. If you want to grab a virtual coffee with me, send me a DM. I'd love to chat!

👋 Sara Beery, Talia Speaker, Jason Holmberg (Wild Me), Ayan Mukhopadhyay, Anthony Bao, Carl Boettiger, Carly Batist, Jędrzej Świeżewski
Angjoo Kanazawa (kanazawa@berkeley.edu)
2021-10-31 20:20:26

Hi all👋! What an amazing community! I’ve been lurking a little bit! My name is Angjoo and I’m an assistant professor at UC Berkeley in the department of Electrical Engineering and Computer Science, specializing in computer vision, graphics, and learning. My research is about perceiving the dynamic 3D world that underlies our images and videos. In particular I focus on non-rigid objects, such as humans or animals, and my thesis was on 3D reconstruction of Animals 🙂. This is such a challenging problem with a lot of depth that we will continue to pursue and I would love to hear from you about real problems and wishlists of what computer vision methods could provide! http://people.eecs.berkeley.edu/~kanazawa/

👋 Sara Beery, Justin Kay, Silvia Zuffi, Oisin Mac Aodha, Emily Charry Tissier, Benno Simmons, Emilio Luz-Ricca, Declan, Jason Holmberg (Wild Me), Armin Bazarjani
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2021-11-01 05:59:38

*Thread Reply:* Welcome! Great to see you here! 😄 And a great initiative too; I am likewise interested in getting to know more about the potential applications of vision(ary) work for conservation & Co.

🙂 Angjoo Kanazawa
Rowan Converse (rowanconverse@unm.edu)
2021-11-01 15:59:54

Hi all, I’m a PhD candidate in Geography at the University of New Mexico. My dissertation work is focused on detection and identification of waterfowl from UAS imagery. We have a sizable set of labeled UAS images of birds (mostly ducks) that we are interested in making public for research purposes-- any suggestions on a good platform to publish it? Thanks for any advice!

👋 Sara Beery, Oisin Mac Aodha, Declan, Jason Holmberg (Wild Me)
Sara Beery (sbeery@caltech.edu)
2021-11-01 16:01:11

*Thread Reply:* @Ben Weinstein

Carl Boettiger (cboettig@berkeley.edu)
2021-11-01 16:11:12

*Thread Reply:* @Rowan Converse What's the database size? For scientific research purposes, a DOI-granting repository like Zenodo.org (which is built on CERN's Data Center) is probably preferable.

👍 Sara Beery, Rowan Converse
Carl Boettiger (cboettig@berkeley.edu)
2021-11-01 16:16:29

*Thread Reply:* I believe up to 50 GB per 'dataset' is free, and there is no limit on how many free datasets (https://help.zenodo.org/). They encourage you to contact them for larger use cases. You've probably already thought through copyright for distribution and possible concerns about geolocation of rare or protected species.
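
For anyone scripting uploads rather than using the web UI, here is a minimal sketch of assembling the metadata for Zenodo's REST deposition API. The field names and endpoints are recalled from Zenodo's public developer docs; verify them against the current documentation before relying on this.

```python
import json

def build_deposition(title, description, creators, keywords=()):
    """Assemble the JSON body for creating a Zenodo deposition.

    Field names (upload_type, creators, access_right, ...) follow
    Zenodo's deposition metadata schema as documented publicly;
    double-check against the current API docs before use.
    """
    return {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": description,
            "creators": [{"name": n} for n in creators],
            "keywords": list(keywords),
            "access_right": "open",
        }
    }

# The actual upload would then be roughly two authenticated requests:
#   POST https://zenodo.org/api/deposit/depositions   (with this JSON body)
#   PUT  {bucket_url}/{filename}                      (streaming each file)
# followed by a publish action, all using a personal access token.

payload = build_deposition(
    "Labeled UAS imagery of waterfowl",          # hypothetical dataset title
    "Drone images of birds with species labels",  # hypothetical description
    ["Converse, Rowan"],
    keywords=["UAS", "waterfowl"],
)
print(json.dumps(payload, indent=2))
```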

Sara Beery (sbeery@caltech.edu)
2021-11-01 16:30:27

*Thread Reply:* There's also LILA.science @Dan Morris

👀 Carl Boettiger
Dan Morris (agentmorris@gmail.com)
2021-11-01 16:57:02

*Thread Reply:* As Sara suggests, we host some data like this on lila.science, e.g.:

https://lila.science/datasets/aerial-seabirds-west-africa/

If you're interested, drop us an email at info@lila.science. Thanks!

👍 Rowan Converse
Carl Boettiger (cboettig@berkeley.edu)
2021-11-01 17:26:39

*Thread Reply:* LILA looks really cool, I hadn't seen that. No doubt there are some significant performance advantages in using Azure-based hosting, especially if your data is in the TB+ range (like those NOAA seals). It's generous of Microsoft to host such a resource. All the same, I do think it would be nice to see modern metadata standards, such as the FAIR principles, for data intended for scientific archives.

Carl Boettiger (cboettig@berkeley.edu)
2021-11-01 17:31:11

*Thread Reply:* For instance, for that seal image data, NOAA maintains a rich metadata record, https://www.fisheries.noaa.gov/inport/item/63322, which includes schema.org markup, allowing it to be included in linked-data indexes like https://datasetsearch.research.google.com or data.gov. DOI-granting repositories also integrate with publication services for citation metrics on the original data, track versions, and must meet a high standard of archival persistence for scientific research. These may be considerations if citability and discoverability are important, particularly if LILA grows to index more datasets than can be easily browsed manually.

Ben Weinstein (benweinstein2010@gmail.com)
2021-11-01 17:48:06

*Thread Reply:* @Rowan Converse as part of the bird paper, we placed your portion on zenodo. See newmexico.zip https://zenodo.org/record/5033174#.YYBggNbMKlo I recommend zenodo for the rest.

👍 Rowan Converse, Sara Beery
Rowan Converse (rowanconverse@unm.edu)
2021-11-01 17:48:36

*Thread Reply:* Thanks everyone for the replies, this is great!

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-11-02 08:36:29

*Thread Reply:* Also, figshare waives dataset size limits if it’s set as public (good for large datasets)

👍 Rowan Converse
Dan Sheldon (sheldon@cs.umass.edu)
2021-11-03 16:33:48

*Thread Reply:* Does anyone have experience/recommendations for archiving larger data sets — on the order of, say, 2-3 TB?

I checked with zenodo (a while back) and there was no mechanism that would work for both my institution and theirs to provide payment for the extra resources needed for a data set of this size, and I understand that figshare has a limit of 1TB.

Sara Beery (sbeery@caltech.edu)
2021-11-03 16:35:14

*Thread Reply:* There are several datasets on LILA that are many TB. I would reach out to Dan Morris (email info@lila.science).

👍 Dan Sheldon
Dan Sheldon (sheldon@cs.umass.edu)
2021-11-03 16:38:14

*Thread Reply:* Thanks!

Dan Sheldon (sheldon@cs.umass.edu)
2021-11-03 16:40:19

*Thread Reply:* I can ask Dan this, but do you know if LILA grants DOIs?

Sara Beery (sbeery@caltech.edu)
2021-11-03 16:42:42

*Thread Reply:* I'm not sure. If not I wonder if that's something that could be set up.

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-11-03 16:56:15

*Thread Reply:* Oh really? I didn’t know figshare had a 1TB limit (though I’ve not actually tried). I know they have a ‘figshare+’ that allows up to 5TB, but that requires a one-time data publishing charge (so you may run into the same problems you had with zenodo)

Dan Sheldon (sheldon@cs.umass.edu)
2021-11-03 16:59:00

*Thread Reply:* Thanks! I’ll look into the details of figshare. My information came from a publisher’s page and may not be up-to-date. A charge could be OK, the issue with zenodo was more that they didn’t have an easy payment mechanism.

Dan Morris (agentmorris@gmail.com)
2021-11-03 17:04:44

*Thread Reply:* LILA doesn't generate DOIs, plus if I told you we did, I would then immediately tell you not to use any DOI that I generated for you. 🙂 IMO you want to be in total control of any DOIs on which you take dependencies (e.g. by embedding in an archival publication).

[Now I will drift from facts about LILA into philosophy about DOIs...]

Regardless of where you host data, unless you are 100% sure your host will be there in 10 years, my personal recommendation is that you administer your own DOI.

👍 Sara Beery
Dan Sheldon (sheldon@cs.umass.edu)
2021-11-03 17:18:20

*Thread Reply:* Interesting!

Jorrit van Gils (vangilsjorrit@gmail.com)
2021-11-02 10:55:53

Dear people from AI for Conservation, I'm a master's student in Forest and Nature Conservation in Wageningen (The Netherlands). In my thesis I compare two deep learning behaviour classification techniques on wildlife camera trap images of red deer. One technique includes pose estimation. As I'm really passionate about AI combined with camera traps and satellites, I'm looking for PhD positions or other career opportunities that would allow me to develop further in artificial intelligence after I graduate in January 2022. Looking forward to meeting you!

👋 Sara Beery, Carl Boettiger, Jason Holmberg (Wild Me), Lily Xu, Thijs
Thijs (thijs@q42.nl)
2021-11-03 12:18:16

*Thread Reply:* @Jorrit van Gils maybe we should have a quick chat. Sounds like we're working on similar things (in the Netherlands)!

Sara Beery (sbeery@caltech.edu)
2021-11-02 12:19:28

Come learn more about the CV4Ecology summer school! Do you have any questions about the course? the application process? the timeline? Not sure if you're a good fit? We're hosting a zoom infosession on 11/4 at 9:30am PT (GMT -7:00), sign up here: https://docs.google.com/forms/d/e/1FAIpQLSdSOA-C9bJc1bhgdyB1vk3xVkcD8u10dSeaFRLZMoQydvn3TA/viewform

👍 Mitch Fennell, Carly Batist, Jason Holmberg (Wild Me), Rowan Converse, Reshu Bashyal
Axel Rossberg (axel@rossberg.net)
2021-11-03 14:04:56

Hi, I am a theoretical ecologist working on biodiversity, food webs, and meta-community ecology, and just joined AI for Conservation.

👋 Sara Beery, Stefan Schneider, Declan, Oisin Mac Aodha, Benno Simmons, Carl Boettiger, Nicolas Betancourt, Monty Ammar
Reshu Bashyal (bashyalreshu@gmail.com)
2021-11-10 00:11:16

Hi AI for Conservation! I am Reshu Bashyal, a conservationist based in Nepal. I have no prior experience in AI but am very much fascinated with how AI is being integrated in conservation activities. I know, this is a great platform for AI aspirants. Looking forward to learning from the community.

👋 Sara Beery, Arjun Subramonian (they/them), Suzanne Stathatos, Talia Speaker, Benjamin Kellenberger, Lukas Picek, Omiros Pantazis, Ritwik, Nicolas Betancourt, Declan, aruna, Angjoo Kanazawa, Jon Van Oast, Monty Ammar
:thumbsup_all: Frederic Fol Leymarie
Lucia Gordon (luciagordon@college.harvard.edu)
2021-11-11 21:04:52

Hello everyone! I recently joined AI for Conservation and am excited to be part of the community. I am a senior in college interested in pursuing a PhD in AI for Conservation. As I haven’t found any compiled list of academic labs pursuing this kind of work, I thought that we could create one for the community. Please add any academic labs you know of working on AI for Conservation to this spreadsheet!

👍 Sara Beery, Oisin Mac Aodha, Carly Batist, Hemal Naik, Jorrit van Gils, Monty Ammar
🙌 Emilio Luz-Ricca, Lily Xu, aruna, Gracie Ermi
🤩 Rosie Crawford
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2021-11-12 10:07:27

*Thread Reply:* https://www.compsust.net/

👍 Sara Beery
Ted Schmitt (teds@allenai.org)
2021-11-12 13:58:37

*Thread Reply:* I’m not sure who to add but you should be aware of this work: https://www.wur.nl/en/show/Imagine-all-the-animals-living-life-in-sociable-groups.htm in the Netherlands

Katie Wetstone (she, her) (katie@drivendata.org)
2021-11-12 15:08:31

Hi everyone! 🐵 We are spreading the word about a free, open-source tool called Zamba that automatically detects and classifies animals in camera trap videos. If you use camera traps to capture videos, we’d love your feedback! 🐵 Try your videos with the species we cover or use our training functionality to make yourself a custom model just for your data. 

We want to make the tool as useful as possible, and are hoping to gather user feedback. In particular, we’d love to have users test out the Zamba python package. We’ve just released v2 of this package with brand new models and more features!

For background, we at DrivenData developed the tool in partnership with experts from the Max Planck Institute for Evolutionary Anthropology (MPI-EVA). A few basics about Zamba:
• 🧠 Zamba uses artificial intelligence and computer vision to perform intensive camera trap video processing work, freeing up more time for humans to focus on interpreting the content and using the results.
• 🧑‍💻 Zamba can be accessed through an easy command line interface or as a Python package - the code is all open-source on GitHub!
• 🐘 Pretrained models are available to predict 42 different species common to western Europe and central Africa, plus blank versus non-blank.
• 🌍 Zamba can be adapted to any set of species or ecosystem. Users can easily use their own labeled videos to generate a retrained model specific to their use case.
• 📸 Zamba is trained on over 27,000 hand-labeled camera trap videos.
A couple of ways you can contribute:
• Flag any bugs you find while using the Zamba python package - or submit an issue directly to the GitHub repo
• Let us know which parts of the package documentation are confusing or could be improved
• Train a new custom model and make it available to others through the Model Zoo Wiki
You can send us any feedback or thoughts by commenting on this post, filing an issue on the GitHub repository, or by emailing info@drivendata.org. Close collaboration with subject area experts has been critical to the development of Zamba, and we look forward to hearing your perspectives! 🎉

😎 Jon Van Oast, Jason Holmberg (Wild Me), Lily Xu, Sachith Seneviratne, Ayan Mukhopadhyay
🎁 Peter Bull
👍 Sara Beery, Bistra Dilkina, Cameron Trotter, Ross Gardiner, Dan Morris
🎉 Emily Dorne, Jason Parham
Katie Wetstone (she, her) (katie@drivendata.org)
2021-11-12 15:11:49

*Thread Reply:* Tagging some of our awesome team members! @Peter Bull @Emily Dorne @Greg Lipstein

Océane (boulaisoceane@gmail.com)
2021-11-15 14:00:12

*Thread Reply:* has this been used for underwater ecosystems?

Peter Bull (peter@drivendata.org)
2021-11-15 14:08:39

*Thread Reply:* Not yet. Do you have more info on how the data is collected? Since some of the parameters are tuned to one-minute videos from terrestrial motion-triggered camera traps, I expect there is likely a fair amount of work to tune the models to the underwater setting.

Hemal Naik (hnaik@ab.mpg.de)
2021-11-16 10:01:02

Hi everyone, which conferences or journals accept datasets? (Both biology and computer science venues would be fine.) I am working with the MPI of Animal Behavior and we are trying to push out a lot of interesting and unique datasets for the CV and ML community. I know that NeurIPS announced a datasets track recently. Does anyone here have experience publishing datasets? Would love to connect and get some feedback on our work.

Oisin Mac Aodha (macaodha@caltech.edu)
2021-11-16 10:10:40

*Thread Reply:* Hey Hemal.

We often have lots of dataset papers at the FGVC workshop at CVPR. e.g. from last year https://sites.google.com/view/fgvc8

👍 Hemal Naik, Sara Beery
Beckett Sterner (bsterne1@asu.edu)
2021-11-16 11:14:28

*Thread Reply:* The Biodiversity Data Journal could be an option: https://bdj.pensoft.net/journals.php?journal_name=bdj

Ayan Mukhopadhyay (ayanmukg@gmail.com)
2021-11-16 12:38:38

*Thread Reply:* Hi @Hemal Naik, Nature Scientific Data (https://www.nature.com/sdata/) is a great place, and so is the NeurIPS datasets track. We just published one at NeurIPS, so I would be happy to answer any questions you might have.

👍 Sara Beery
Hemal Naik (hnaik@ab.mpg.de)
2021-11-16 12:40:01

*Thread Reply:* Hi @Ayan Mukhopadhyay, thanks for the offer. Would definitely like to get in touch. Can you share a link to your paper? I assume the paper will also have a contact email. Looking forward.

Ayan Mukhopadhyay (ayanmukg@gmail.com)
2021-11-16 12:44:00
Hemal Naik (hnaik@ab.mpg.de)
2021-11-16 12:49:04

*Thread Reply:* Thanks I will get in touch.

Ted Schmitt (teds@allenai.org)
2021-11-19 19:58:37

Clare Barclay, Chief Executive Officer at Microsoft UK, said: “The untapped potential of AI and machine learning can help solve some of the world’s most complex environmental challenges. Our first-of-its-kind multispecies AI model Project SEEKER can help tackle the wildlife trafficking trade, while protecting animal ecosystems. The importance of collaboration and partnership with more organisations couldn’t be greater as we look to protect the environment and the world’s most endangered species.” https://news.microsoft.com/en-gb/2021/11/18/first-of-its-kind-multispecies-ai-model-to[…]egal-wildlife-trafficking-is-ready-to-roll-out-to-airports/

👍 Sara Beery, Jason Holmberg (Wild Me), Megan Cromp, Dan Morris
😎 Jon Van Oast, Jason Holmberg (Wild Me)
Gracie Ermi (gracieermiifthen@gmail.com)
2021-11-22 14:35:24

Thanks to @Lucia Gordon’s great idea to compile a list of academic labs doing conservation tech research, @Carly Batist and I have added a new “academic lab” tag to https://conservationtech.directory and a number of new entries in that category (we’re up to 568 total entries now 🎉). Know of any university groups working on conservation tech that aren’t included? Let us know so that we can get them added! And please keep letting us know about any corrections that need to be made to anything in the directory. Thanks everyone!

❤️ Lucia Gordon, Sara Beery, Justin Kay, Talia Speaker, Lily Xu, Declan, Carly Batist, Atriya Sen
🎉 Jon Van Oast, Sara Beery, Lily Xu, Carly Batist, Atriya Sen
Lily Xu (lily_xu@g.harvard.edu)
2021-11-22 15:08:58

*Thread Reply:* Thank you Lucia for the idea and Gracie and Carly for the implementation!! What a wonderful resource (I wish I had this 4 years ago when I was applying to grad schools!)

😃 Lucia Gordon, Gracie Ermi, Carly Batist
Lily Xu (lily_xu@g.harvard.edu)
2021-11-22 15:12:35

Climate Change AI is excited to announce a call for applications for their upcoming summer school on Climate Change and Artificial Intelligence in 2022. This summer school (to be held virtually on weekdays between Aug 15th - 26th 2022, time zone TBD) is designed to educate and prepare participants with a background in artificial intelligence and/or a background in a climate-change related field to tackle major climate problems using AI.

❤️ Sara Beery, Elijah Cole (Deactivated), Lucia Gordon, Chris Yeh, Sachith Seneviratne, Peter Bull, Stefan Schneider, Carly Batist, David, Gracie Ermi
Lily Xu (lily_xu@g.harvard.edu)
2021-11-22 15:12:58

*Thread Reply:* Climate Change AI Summer School 2022
Virtual: weekdays between Aug 15th – 26th, 2022 (time zone TBD)
Website: https://www.climatechange.ai/events/summer_school2022.html
Application deadline: December 17th, 2021
Contact: summerschool@climatechange.ai

The Climate Change AI summer school is designed to educate and prepare participants with a background in artificial intelligence and/or a background in a climate-change related field to tackle major climate problems using AI. The summer school aims to bring together a multidisciplinary group of participants and facilitate project-based team work to strengthen collaborations between different fields and foster networking in this space.

The first part of the summer school will consist of a mix of lectures and hands-on tutorials organized into two tracks, one focused on AI fundamentals and one focused on climate change. In both tracks, the program will provide an overview of machine learning applications in a broad range of climate change-related areas. This includes covering foundational machine learning methods and state-of-the-art tools, while underlining their advantages and limitations, and describing how they can be used in practice to address the climate crisis. The second part of the summer school will consist of a collaborative project at the intersection of climate change and machine learning. Participants will work together in multidisciplinary groups under the guidance of a mentor to develop AI-based solutions for climate change problems.

The cohort will be composed of applicants from complementary areas of study/work, to be selected on the basis of their background and experience as well as their motivation for joining the summer school.

Applications are due by: Dec 17, 2021 23:59 AOE (Anywhere on Earth, UTC-12). To apply, please submit your application through this form: https://forms.gle/9eSETySjVvdzkfiM6
Admission notifications will be sent out during the week of Feb 21, 2022.
The summer school is free to attend. Applicants who are accepted will be asked to confirm their attendance for the entire duration of the summer school. This course will be instructed by members of CCAI and world-renowned experts in ML and Climate Change. 

For further information please check out the summer school website or contact summerschool@climatechange.ai

Krasi Georgiev (krasi@arribada.org)
2021-11-23 08:22:47

@Dan Morris hey Dan, re this issue https://github.com/microsoft/CameraTraps/issues/260

I am working on developing a smart camera as part of the arribada.org initiative. The idea of the camera is to reduce human-animal conflict by providing an early warning system for camps, villages, etc.

Basically run a detection model on low cost/ low energy devices - rpi, arduino etc. and when it detects a lion or an elephant for example to send a signal to people in charge of the camp security.

We are working together with edgeimpulse.com on this project, and the idea was to avoid some manual tagging when building the ML model for the edge device.
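
The alerting side of such a pipeline can be sketched independently of the detector. Below is a minimal sketch; the species list, confidence threshold, and cooldown are illustrative, and on-device the labels and confidences would come from whatever model runs on the Pi (e.g. a TFLite interpreter), not from this code.

```python
import time

# Illustrative: species that should trigger a warning to camp security.
ALERT_SPECIES = {"lion", "elephant"}

def should_alert(label, confidence, threshold=0.8):
    """Alert only on target species detected with high confidence."""
    return label in ALERT_SPECIES and confidence >= threshold

class DebouncedAlerter:
    """Avoid re-alerting on every frame of the same animal.

    `send` is any callable that delivers the alert (SMS, LoRa, radio...);
    `clock` is injectable so the logic can be tested without sleeping.
    """
    def __init__(self, send, cooldown_s=60.0, clock=time.monotonic):
        self.send = send
        self.cooldown_s = cooldown_s
        self.clock = clock
        self._last = {}  # species -> time of last alert

    def handle(self, label, confidence):
        if not should_alert(label, confidence):
            return False
        now = self.clock()
        if now - self._last.get(label, float("-inf")) < self.cooldown_s:
            return False  # still within cooldown for this species
        self._last[label] = now
        self.send(label, confidence)
        return True
```

In the field loop, each detection from the model would simply be passed to `alerter.handle(label, confidence)` after inference on the captured frame.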

😎 Jon Van Oast
Jonathan Crall (erotemic@gmail.com)
2021-11-24 14:32:50

To all those interested in small-object detection problems, augmented reality, web 3.0, open source benchmark datasets, change detection, and re-examining the existing academic publishing paradigm:

I've been working on a personal dataset-collection project for almost a year now, and it's at the point where I'm ready to start releasing details.

Allow me to introduce: ShitSpotter - An open source algorithm and dataset for detecting poop in pictures. The github README gives details about the motivation, data collection, annotation process, and algorithm. The dataset will be published on Web 3.0 via IPFS.

https://github.com/Erotemic/shitspotter

While this is only tangentially related to conservation, I think there could be ecological applications of either the dataset itself or the tools I'm going to write to bootstrap the annotation process. Collecting fecal samples is important in some aspects of ecology, and this dataset domain might transfer to something like detecting feces (and perhaps even guessing the species of the feces) in wooded areas as one metric for population monitoring.

😎 Jon Van Oast, Elijah Cole (Deactivated), Sara Beery, Jason Holmberg (Wild Me), Chuck Stewart, Sanjana Baliga, Declan, Yves Bas
💩 Jon Van Oast, Sanjana Baliga, Carly Batist, Agnethe Seim Olsen, Petar Gyurov, Jason Parham
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2021-11-25 07:42:28

Hello everyone,

Are you interested in working on habitat suitability mapping with Earth observation and machine/deep learning, or happen to know an excellent MSc. candidate you could recommend for it? Thanks to our recently awarded Swiss National Foundation project “Learning unbiased habitat suitability at scale with AI”, we are looking for two motivated PhD candidates to join the ECEO lab at EPFL in Sion, Switzerland!

For more information see the official advertisement here: https://www.epfl.ch/about/working/enac-two-phd-positions-large-scale-habitat-suitability-mapping-with-machine-learning/ I am looking forward to hearing from you!

😍 Sara Beery, Nico Lang, Yihang She, Jorrit van Gils, Océane, Monty Ammar
Monty Ammar (montyx23@gmail.com)
2022-01-10 12:29:56

*Thread Reply:* Hey Benjamin, I actually came across this post today and was interested in applying. I'm still doing my master's (which ends in September), though; is it still alright to apply, with the possibility of starting late if accepted?

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2022-01-10 12:37:40

*Thread Reply:* Hello Monty! At this point we have a selection of applicants to be interviewed. Also, the chances of us postponing the project to the second half of the year are unfortunately slim, as we are bound by the grant. I’d recommend keeping an eye out for announcements, though!

👍 Monty Ammar
Monty Ammar (montyx23@gmail.com)
2022-01-12 00:54:42

*Thread Reply:* Thanks Benjamin!

Cameron Trotter (cater@bas.ac.uk)
2021-11-25 10:05:29

Hi all, I’m looking for a (labelled) dataset of camera trap images which have been captured in the UK/Northern Europe - ideally from countryside/forested locations. Does anyone have any links to sets I could look at? Sets that contain images from this geographical area alongside others would be fine too provided it would be easy enough to subset and only use UK/NE images. Thanks in advance 🙂

👍 Sara Beery
Alan Ma (alanma393141@gmail.com)
2021-11-27 16:56:08

Howdy all,

I am currently working on a roadkill prevention project that involves using a trap camera to help collect wildlife activity data. Its purpose is to warn drivers of detected animal presence and simultaneously collect image data for monitoring wildlife activity along road environments, in the pursuit of preserving wildlife and human wellbeing. In particular, this project is expanding the computer vision side of things through greater species identification power. Via multiple infrared sensors and a radar sensor, the project hopes to detect wildlife presence through changes in environmental heat signatures and depth. Once the sensors have been triggered, a NoIR camera snaps a picture of its field of view. After the picture has been collected, hopefully capturing wildlife in the process, an offline pre-trained ML model runs to identify the wildlife. The identified wildlife species, as well as metadata, are mapped for the purpose of identifying regions of dense wildlife activity as well as possible migration patterns.

This project is looking for suggestions and collaboration. Due to my limited scope on the implementation of artificial intelligence, I was hoping to receive assistance from the community on improving the machine learning capabilities of this project. More specifically, the project is looking to (1) expand to a larger wildlife species prediction capability and (2) identify migration/animal behavior patterns. (If you know of, or are willing to share, a database of wildlife images to help improve this project's machine learning model, it would be greatly appreciated! One of the major challenges in improving the current wildlife identification model is a lack of available images for model training.)

If there are individuals willing to, I would love to discuss more deeply the methods used or other ideas on how to improve on the project's limitations.

Thanks in advance for your time - Alan

👍 Sara Beery, Jon Van Oast, Jason Holmberg (Wild Me), Kai Waddington
Subhransu Maji (smaji@cs.umass.edu)
2021-11-29 06:31:28

Hi all! I’ve been lurking for a (long) while. I’m an associate professor in computer science at the University of Massachusetts Amherst, specializing in computer vision, graphics, and ML (https://people.cs.umass.edu/~smaji/). I’ve organized the fine-grained visual recognition workshops (https://sites.google.com/view/fgvc8) and lately been looking into analyzing bird migration via RADAR data (e.g., https://www.nature.com/articles/s41558-019-0648-9). On the CV+ML side we have been interested in semi-/self-supervised learning for fine-grained classification and have organized a couple of kaggle challenges at the last two FGVC workshops.

We have been thinking about doing the same for part segmentation. Question to the community: Are there any fine-grained segmentation (part labels) datasets of animals out there? We wanted to explore few-shot segmentation tasks, but couldn’t find any datasets of birds or animals with a large number of part labels. Most of these contain keypoints or just a few parts (3-4). For comparison, there are datasets with dozens of labeled parts for human faces or cars.

👋 Oisin Mac Aodha, Lily Xu, Sara Beery
Daniel Davila (daniel.davila@kitware.com)
2021-11-29 10:16:12

*Thread Reply:* It may be a stretch, but perhaps the recent pose datasets could be of interest? It is not segmentation-ready, but it may be useful to bootstrap some segmentation ground truth.

https://github.com/AlexTheBad/AP-10K

👍 Subhransu Maji
Subhransu Maji (smaji@cs.umass.edu)
2021-11-29 10:20:12

*Thread Reply:* Thanks, this looks great!

Diego Marcos (diego.marcos.gonzalez@gmail.com)
2021-12-21 10:02:15

*Thread Reply:* Hey @Subhransu Maji! Sorry for the delay here. The CUB dataset does have quite extensive part/attribute annotations for 200 species of North American bird species: http://www.vision.caltech.edu/visipedia/CUB-200.html

Sara Beery (sbeery@caltech.edu)
2021-12-01 11:30:01

DEADLINE EXTENSION

We've received multiple requests to submit late applications to the CV4Ecology Summer School, so to be fair to all applicants the deadline is being extended to Friday, December 3rd at midnight Pacific Time. We'll be emailing the mailing list and updating the webpage, please share!

👍 Jorrit van Gils, David, Mitch Fennell, Jason Holmberg (Wild Me), Talia Speaker, Casey Youngflesh, Carly Batist, Benjamin Kellenberger, Riccardo de Lutio, Elijah Cole (Deactivated)
Monty Ammar (montyx23@gmail.com)
2021-12-21 18:21:41

*Thread Reply:* Hey Sara, will this Summer school be running on a yearly basis?

David Will (david.will@islandconservation.org)
2021-12-03 15:44:37

Exciting volunteer opportunity on Robinson Crusoe Island, Chile.

We are pleased to announce the “Work for Humankind” partnership between Lenovo, Island Conservation and the Robinson Crusoe Island community that calls on volunteers from around the world to take part in a once-in-a-lifetime opportunity: to experience first-hand how to make a long-lasting difference with a remote island community, while working from one of the most remote offices in the world enabled by tech.

We are looking for volunteers with a range of skills, backgrounds, and specialties to travel to Robinson Crusoe to help prevent the extinction of endangered species and support the local community as it works toward achieving sustainability, using newly established internet connectivity and computing infrastructure hosted in a community technology hub established by Lenovo.

On the conservation side we are looking for volunteers with experience in machine learning, artificial intelligence, and data management to advance the first community-led, data driven island restoration project to protect the threatened Pink footed shearwater. This includes supporting the local team in training and implementing camera trap classification models and aggregating detection data and field observations into quantitative analysis of project progress.

Those interested in becoming one of the lucky volunteers to support this project on Robinson Crusoe Island while being able to continue working their current day jobs remotely can find out more and apply at www.LenovoWFH.com by December 30, 2021.

https://www.islandconservation.org/work-for-humankind-lenovo-invites-you-to-work-from-one-of-earths-most-remote-locations-with-smarter-technology/

😍 Sara Beery, Elijah Cole (Deactivated), Jorrit van Gils, Océane, Arthur Wandzel
Justin Kay (justinkay92@gmail.com)
2021-12-03 16:47:45

*Thread Reply:* Do you think Lenovo would be open to paying participants for their conservation work? The work Island Conservation is doing is amazing and the conservation goals here are obviously very important - but if Lenovo wants to make a larger impact, it seems to me a great contribution would be to directly invest in the conservationist labor. Especially considering Lenovo has $60B+ in revenue and seems to be getting a good amount of marketing benefit out of this (their name is mentioned 26 times in this article). I also can’t help but feel the study on Gen Z work preferences is not recognizing the anxiety a lot of people face now due to lack of job security in the gig economy - being paid is also a great way to communicate that your work has value! Again, I’m all for the conservation goals, this is more so a comment on big tech getting benefit from free labor… maybe they can be convinced to invest further 🙂

🙌 Sean Carter
David Jarrett (david.jarrett@durham.ac.uk)
2021-12-04 06:43:45

Hi all, I'm @David Jarrett. I'm new to the group (it looks really interesting and useful! Thanks to @Dan Morris for the suggestion). I'm using AudioMoths to monitor breeding productivity in shorebirds (particularly Eurasian Curlew) in Scotland, aiming to develop binary classifiers (using e.g. CNNs in Python) over the next year or so to extract alarm calls / chick warning calls from large datasets. Really interested to talk to people who've done similar, share code, etc.

A question to start: if I want to train a classifier on the call in the attached image, is there a material benefit to labelling each individual call separately for training data (resulting in 11 labelled pieces of data in this example), as against labelling the group of calls (so 1 piece of data here)? In practice these calls most often occur in repeated sets like this, but can also occur in ones/twos/threes etc.

Cheers, David

Beckett Sterner (bsterne1@asu.edu)
2021-12-04 18:07:59

*Thread Reply:* 👆@Caleb Powell

Yves Bas (yves.bas@gmail.com)
2021-12-05 04:26:52

*Thread Reply:* If you have little overlap in time AND frequency with other vocalisations, you can use basic automatic segmentation as a basis for quick labelling, as in the Tadarida software: https://openresearchsoftware.metajnl.com/articles/10.5334/jors.154/ You select groups of pre-segmented calls at once.

Caleb Powell (cpowel21@asu.edu)
2021-12-06 13:29:31

*Thread Reply:* Sounds like a fun project! Assuming all the audio you'd like to process is of variable lengths, you might find it necessary to split the audio up into discrete pieces. With this in mind, knowing the time of each call would help you decide how best to split up the data.

If your training data is scarce, splitting it up can be useful for augmentations such as layering calls over clips of environmental noise. The one catch here is that you will have to take additional care when splitting training and testing data, so that augmented copies of the same clip don't leak across the split.

Also, might be worth looking into attention modules (i.e., transformers). I've had great success in the past using them for audio processing. Here is a source that helped me: https://arxiv.org/abs/1803.02353
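A minimal numpy sketch of the noise-layering augmentation mentioned above, assuming mono float arrays of equal length (the function name and the synthetic signals are illustrative, not from any library):

```python
import numpy as np

def mix_at_snr(call, noise, snr_db):
    """Overlay a call clip on background noise at a target SNR (dB).

    Scales the noise so that 10*log10(P_call / P_noise_scaled) == snr_db,
    then returns the sum. Both inputs are mono float arrays of equal length.
    """
    p_call = np.mean(call ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_call / (p_noise * 10 ** (snr_db / 10)))
    return call + scale * noise

rng = np.random.default_rng(0)
sr = 8000
call = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # stand-in for a labelled call clip
noise = rng.normal(0.0, 0.1, size=sr)                # stand-in for environmental noise
augmented = mix_at_snr(call, noise, snr_db=10.0)
```

Sampling a fresh noise clip and a random SNR per training example gives many augmented variants from one labelled call.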

Daniel Davila (daniel.davila@kitware.com)
2021-12-06 10:41:16

Hey folks, I wanted to share one of the projects I've been working on here at Kitware. We've built an open-source system (both software and hardware) that implements a drone payload for computer vision applications, called ADAPT. We noticed there are many groups out there spending hard-earned research dollars re-engineering the same payload, over and over, that others have largely built already in their own silo. Our goal is to provide an MVP that reduces the redundant non-recurring engineering (NRE) required to field an AI-enabled drone application. To fly your own AI mission with high-quality georegistered data collection or analytics, you simply purchase the parts we've curated and integrated, follow the build instructions, and bring your own AI model. This leaves more resources for municipalities, universities, and other stakeholders to spend on the data-driven aspects of the mission.

As many of you know, we are an open source scientific computing company so this is not a product but rather a result of our research efforts funded through NOAA, which we make free to the community. We are currently seeking feedback on the payload, as well as stakeholders who might want to fly an upcoming mission and let us know how it goes. We’ve already proven out the system with our friends at the University of Alaska-Fairbanks, who built and flew an ice reconnaissance / data collection mission for just $10k (vs $100k+ from scratch). Please find more details about our project, including parts list and build instructions here: https://kitware.github.io/adapt/

Thanks yall! - Dan

😎 Jon Van Oast, Emilio Luz-Ricca, Suhail Alnahari
👏 Malte Pedersen, Juan Arrechea, Howard L Frederick
Ben Weinstein (benweinstein2010@gmail.com)
2021-12-06 11:55:04

*Thread Reply:* @Heather Lynch

Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2021-12-06 17:59:16

Hello! Can anyone recommend papers to read for state-of-the-art ML processing for camera trap videos? Looking for existing approaches, open source libraries, segmentation experiments, etc. Thanks in advance!

🎉 Sara Beery, Jon Van Oast
Peter Bull (peter@drivendata.org)
2021-12-06 18:08:06

*Thread Reply:* Hey @Jason Holmberg (Wild Me), hope you’re doing great! There’s not a ton of work at exactly the intersection of camera traps + video, but I’d be happy to share what we learned in our latest round of reviewing things and implementing zamba.

Peter Bull (peter@drivendata.org)
2021-12-06 18:08:48

*Thread Reply:* Shoot me an email and we can find a time

👍 Jason Holmberg (Wild Me)
Dan Morris (agentmorris@gmail.com)
2021-12-12 13:43:03

*Thread Reply:* FYI we have a bit of experience with using MegaDetector on videos... what we do is wildly inelegant (sample frames, run MD, do something sensible to aggregate the results from frames back up to videos), but has been useful to a number of users. Timelapse (which is how most of our users consume MD results) has good support for this, so it didn't require any new UI tooling. So, this is at least an "existing approach". Whether this is "state-of-the-art" is left as an exercise for the reader. 🙂

https://github.com/microsoft/CameraTraps/blob/master/detection/process_video.py
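The "aggregate the results from frames back up to videos" step can be as simple as taking, per video and per class, the maximum detection confidence over the sampled frames. A hedged sketch of that idea (the real process_video.py may aggregate differently; the tuple format here is illustrative):

```python
from collections import defaultdict

def aggregate_frame_detections(frame_results):
    """Collapse per-frame detector output to video-level results.

    frame_results: iterable of (video_id, category, confidence) tuples,
    one per detection, as you might collect after running a detector such
    as MegaDetector on frames sampled from each video. Returns
    {video_id: {category: max_confidence}}.
    """
    videos = defaultdict(dict)
    for video_id, category, conf in frame_results:
        if conf > videos[video_id].get(category, 0.0):
            videos[video_id][category] = conf
    return dict(videos)

# Hypothetical per-frame detections from two clips
detections = [
    ("clip_001.mp4", "animal", 0.35),
    ("clip_001.mp4", "animal", 0.91),
    ("clip_001.mp4", "person", 0.10),
    ("clip_002.mp4", "animal", 0.05),
]
video_results = aggregate_frame_detections(detections)
```

Max-pooling confidence is the simplest "something sensible"; other choices (mean over top-k frames, count of confident frames) trade off sensitivity against flicker-induced false positives.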

👍 Jason Holmberg (Wild Me)
Lily Xu (lily_xu@g.harvard.edu)
2021-12-07 11:13:28

Data4Wildlife challenge: https://www.bright-tide.co.uk/data4wildlifechallenge

Historically, the top three factors driving the global biodiversity extinction crisis have been habitat loss, human-wildlife conflict, and poaching. Over roughly the past decade, a new factor has been propelled into the spotlight: online crime. Online wildlife crime causes species declines, and the algorithmic amplification of wildlife crime via social media platforms, mobile applications, and high-speed broadband access is a grave concern for nongovernmental organizations working to combat it. Join us for the #DATA4WILDLIFE challenge and help slow species declines! The challenge will take place on 29/30 January 2022. Applications are now open and will close at 22.00 on 12 January 2022. Teams will be announced on 17 January 2022, and drop-in mentorship sessions will be held during that week to prepare teams for the two-day challenge.

🙌 Sara Beery, Jorrit van Gils
👀 Cameron Trotter
Sara Beery (sbeery@caltech.edu)
2021-12-07 14:15:49

https://www.nytimes.com/2021/09/30/opinion/animal-extinction.html?smid=tw-share

The New York Times, by Henry M. Paulson Jr.
👀 Burak Ekim, Omiros Pantazis
🙁 Benjamin Kellenberger, Armin Bazarjani
Juan Arrechea (juan.arrechea.conservation@gmail.com)
2021-12-08 12:33:20

Hi all, I'm @Juan Arrechea, new to this group and already amazed by all the awesome stuff posted here! I am a CS student at UT Dallas looking to learn more about all things related to computer science in conservation. I work at the Heard Natural Science Museum & Wildlife Sanctuary in sanctuary management/animal care. If anyone is nearby, please come visit! Huge thanks to @Gracie Ermi for sending me an invite.

👋 Sara Beery, Justin Kay, Jason Holmberg (Wild Me), Alex Borowicz, Benjamin Kellenberger, Gracie Ermi, Declan, Jon Van Oast
Mohit Dubey (mohit.dubey96@gmail.com)
2021-12-08 18:33:24

Hey everyone,

Mohit Dubey (mohit.dubey96@gmail.com)
2021-12-08 18:34:02

I'm @Mohit Dubey and I am a post-bac researcher at Los Alamos National Labs where I do robust machine learning for data from the Mars Rover ChemCam. Applying for PhD's in Climate/ML and looking forward to connecting to you all!

👋 Sara Beery, Declan, Jason Holmberg (Wild Me), Daniel Davila, Jon Van Oast, Ankita Shukla
Sara Beery (sbeery@caltech.edu)
2021-12-10 12:04:39

Rainforest XPRIZE is looking for a Tech Lead! https://www.linkedin.com/jobs/view/technical-lead-xprize-rainforest-at-xprize-2828856128

👍 Jon Van Oast
🎉 Jon Van Oast
Carl Boettiger (cboettig@berkeley.edu)
2021-12-14 16:14:49

Hi all -- a question for those here on any non-academic career trajectory: undergrads in my courses (i.e. environmental data science classes at Berkeley) will often ask for advice about careers: what background/experience/degrees they should pursue. What advice would you give? I feel I have a good grasp on describing the good & bad of academia and what's expected, and to some extent a grasp on federal agencies like NOAA/EPA, but I'm increasingly out of my depth in the private sector. What skills & experiences do you look for? What do you wish someone had told you as an undergraduate?

🙌 Sara Beery, Britney Muller
❤️ Lily Xu, Britney Muller
Carl Boettiger (cboettig@berkeley.edu)
2021-12-14 18:39:08

*Thread Reply:* Also realize that many of you in academic positions have a lot more experience with industry careers and with answering this kind of question for students than I do, so y'all fire away here as well please!

Daniel Davila (daniel.davila@kitware.com)
2021-12-15 09:54:19

*Thread Reply:* Hey again Carl! I've been working in industry (mostly R&D groups) for a while now, and at various points I've been responsible for interviewing and hiring. Here are some general thoughts; hope they help.

I do not speak for any company in particular, but when evaluating non-researcher MLEs (researchers typically have a mandate very similar to their counterparts in academia), I am typically looking for people with a track record of solving interesting, application-minded problems. A solid grasp of the fundamentals is necessary, but what I usually want to see is evidence that a candidate can apply all that useful theory they learned in school to actually solve an open-ended customer problem. This can be shown through project work, contributions to open-source tools, impact at past internships, etc. Degrees I typically look for are the traditional STEM ones for this field (EE, CS, CompE, etc.). Other majors, including math, applied math, physics, and other engineering disciplines (e.g. BME), aren't disqualifiers, but sometimes these candidates are weaker on the core tools needed for the job (like programming) and need to demonstrate that experience through prior project work, as I mentioned. Non-academic project work also usually implies they have experience with programming languages in the wild, working on a team, and delivering under customer-imposed deadlines. Also, it helps not to be an entitled jerk. 🙂 Can't tell you how many students from top programs I've met who throw their resume in my face and expect a job offer on pedigree alone. Insta-no.

👍 Carl Boettiger
Brian Cohen (bcohen@tnc.org)
2021-12-15 13:59:43

*Thread Reply:* If the students are interested in conservation/environmental fields, getting involved now as a volunteer is a great way to build experience specific to how agencies and organizations in that world operate, and even more importantly a great way to build a professional network that is just as important as the resume. The volunteer work doesn't have to be specific to tech, you just want to get involved in whatever the work is (habitat restoration, river cleanups, species protection, clean water, affordable housing, etc.) and then help the orgs figure out how tech can make that work more efficient, scalable, etc.

👍 Carl Boettiger, Sara Beery
Arthur Wandzel (arthur.wandzel@gmail.com)
2022-01-16 17:53:55

*Thread Reply:* Maybe recommend Designing your Life as a general resource for designing a nonstandard career path?

Hannah Yin (hannah.yin@rice.edu)
2022-02-21 16:03:18

*Thread Reply:* As someone who recently graduated from college with some (limited) industry experience, I'd say networking is key. Don't be afraid to cold email someone for an informational interview or chat over tea/coffee/video/phone if they're part of a startup/company working on sustainability and conservation.

👍 Carl Boettiger
Kakani Katija (kakani@mbari.org)
2021-12-16 17:25:13

Hi everyone!! I'm Kakani Katija, a bioengineer at MBARI (Monterey Bay Aquarium Research Institute) where I lead the Bioinspiration Lab (we do lots of underwater imaging, robotics, bioinspired design). We (along with LOADS of collaborators) recently launched FathomNet, an underwater image database that contains labeled data for ocean life that will hopefully grow with community contributions. Check it out (and the blog and github) and let us know where we can do better. If you're interested in the ocean space, I'm also happy to chat!

🎉 Daniel Grzenda, Sara Beery, Jason Holmberg (Wild Me)
🦑 Daniel Grzenda, Sara Beery, Frederic Fol Leymarie, Jason Holmberg (Wild Me), Ben Weinstein
Ted Schmitt (teds@allenai.org)
2021-12-16 18:51:22

Please share this fantastic internship opportunity with anyone you think may be interested https://boards.greenhouse.io/thealleninstitute/jobs/3738739

😍 Sara Beery, Jason Holmberg (Wild Me), Carly Batist
👍 Jon Van Oast, Charlotte, Jason Holmberg (Wild Me), Monty Ammar
Ben Weinstein (benweinstein2010@gmail.com)
2021-12-17 17:12:13

I just got this message from the Toronto Zoo; anyone know of a GUI that might help them? @Petar Gyurov ```I'm part of the Adopt-A-Pond program with the Toronto Zoo, and we set out camera traps over the summer to capture photos of animals who may be predating wild turtle populations.

We have over 400,000 photos to look through so we're trying to find something to help us out with the analysis.```

Ben Weinstein (benweinstein2010@gmail.com)
2021-12-17 17:12:48

*Thread Reply:* if any students here want to take on a small project.

👍 Fuzail Dawood
Elijah Cole (Deactivated) (ecole@caltech.edu)
2021-12-17 19:17:58

*Thread Reply:* @Benjamin Kellenberger Maybe AIDE is a good fit?

Caleb Robinson (calebrob6@gmail.com)
2021-12-17 20:01:16

*Thread Reply:* I think the Microsoft AI4E camera trap classifier lives on with Dan Morris -- he might be able to help them find which images have any animals

Caleb Robinson (calebrob6@gmail.com)
2021-12-17 20:33:31

*Thread Reply:* @Dan Morris

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2021-12-18 01:42:53

*Thread Reply:* Good opportunity to finally combine MegaDetector (MD) with AIDE? It’s been on my list ever since. Pre-processing with MD and import into AIDE is another option that is almost immediately available. I’d be happy to help out if you find my software to be a potential candidate!

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-12-19 02:20:02

*Thread Reply:* Megadetector or Zamba are good for this. @Sara Beery

Petar Gyurov (pgyurov93@gmail.com)
2021-12-20 05:50:09

*Thread Reply:* @Ben Weinstein you could point them to MegaDetector GUI. It depends whether MegaDetector fits their needs in the first place. I haven't made updates to that project in some time, but it's in a relatively stable state (I think there are some bugs surrounding metadata export). Feel free to point them to me if they want to discuss things. Cheers.

👍 Benjamin Kellenberger, Ben Weinstein
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2021-12-20 05:51:29

*Thread Reply:* Cool project @Petar Gyurov! Didn’t know about it before.

👍 Petar Gyurov, Sara Beery
Dan Morris (agentmorris@gmail.com)
2022-01-10 12:09:52

*Thread Reply:* @Ben Weinstein MegaDetector may be helpful here, but I agree with Petar that it's helpful to determine whether MegaDetector fits their needs in terms of accuracy and image distribution, and if it does, we can help them find a workflow to use the results. We try to "fail fast" if MegaDetector isn't going to work for someone, so feel free to have them email cameratraps@lila.science and we'll help them figure it out. If you're interested in following along, feel free to have them cc you as well. Thanks!

Carly Batist (cbatist@gradcenter.cuny.edu)
2021-12-19 03:35:49

WILDLABS’ annual state of conservation tech survey is still accepting submissions! “The aim of the project is to advance efforts by capturing recent progress, key constraints, and critical opportunities for growth in leveraging technology for conservation. We are seeking input from individuals who have experience developing, using, or otherwise engaging with technology for conservation purposes.” https://colostate.az1.qualtrics.com/jfe/form/SV_aW6C50iOOzOPsB8

👍 Fuzail Dawood, Sara Beery
Monty Ammar (montyx23@gmail.com)
2021-12-21 18:43:32

Hello everyone, I'm Monty Ammar. I'm currently doing a Conservation Biology MSc at DICE, University of Kent, and I have just found the most incredible community to be a part of!

My background is in Animal Biology & Wildlife Conservation. I am doing a dissertation on understanding the effects of small-scale gold mining in Amazonia on forest regeneration using ML. I am also writing a special topics module paper reviewing 'The use of Deep Learning in Conservation Biology: consolidating progress and identifying research gaps'. Any pointers to already-published review papers on the same topic would be appreciated!

I am so excited to be a part of this community and totally overwhelmed with the amount of useful resources being posted here. 😁

😍 Sara Beery, Lily Xu, Jason Holmberg (Wild Me), Juan Arrechea
🎉 Mark Roth
👍 Oisin Mac Aodha
Charlotte (chalange@uos.de)
2021-12-25 11:24:54

*Thread Reply:* Hello Monty, I stumbled upon this very recent review (October 2021) called "Seeing biodiversity: perspectives in machine learning for wildlife conservation" when I got interested in the same topic. They did an amazing job, and I think it might be of help to you. You can find it here: https://arxiv.org/abs/2110.12951

Merry Christmas!

👍 Monty Ammar
❤️ Sara Beery
Monty Ammar (montyx23@gmail.com)
2021-12-26 19:43:56

*Thread Reply:* Merry Christmas! Thanks for this, it looks like a fantastic review. Will definitely be useful for me!

Dan Morris (agentmorris@gmail.com)
2022-01-10 12:19:37

*Thread Reply:* Though it’s specific to AI for camera traps, you may find the list of papers (most with one-paragraph summaries) here to be useful:

https://agentmorris.github.io/camera-trap-ml-survey/#camera-trap-ml-papers

👍 Monty Ammar, Sara Beery
Monty Ammar (montyx23@gmail.com)
2022-01-10 12:27:11

*Thread Reply:* This is vastly useful, thanks Dan 👌

Sara Beery (sbeery@caltech.edu)
2022-01-05 19:48:50

Biden-Harris Administration Invites Public Comment on Development of New Conservation and Stewardship Tool

Sierra Sun Times
💜 Arjun Subramonian (they/them), Omiros Pantazis, Kakani Katija, Jason Holmberg (Wild Me), David Russell, Yuval Boss, Armin Bazarjani, Dhruv Sheth, Carly Batist
Sara Beery (sbeery@caltech.edu)
2022-01-13 19:04:59

The CompSust Doctoral Consortium is an awesome place to present academic work in AI for Conservation and grow your community, applications due January 28! http://www.compsust.net/compsust-2022/

👍 Ankita Shukla, Bistra Dilkina, Dhruv Sheth, Justin Kay, Jason Holmberg (Wild Me), Mark Roth, Lily Xu, Carly Batist, Chris Yeh
🎉 Jon Van Oast, Dhruv Sheth, Jason Holmberg (Wild Me), Suzanne Stathatos, Lily Xu, Chris Yeh
Jason Parham (bluemellophone@gmail.com)
2022-01-14 02:31:27

*Thread Reply:* rubbing hands

Lily Xu (lily_xu@g.harvard.edu)
2022-01-14 12:19:05

*Thread Reply:* ^also open to the broader community beyond PhD students, including undergrad, master's, and postdocs as well as folks outside academia!

❤️ Sara Beery, Monty Ammar
Sara Beery (sbeery@caltech.edu)
2022-01-14 16:50:36

@Jake Wall and I brainstormed this joint local/international conservation technology internship at the Mara Elephant Project while we were in the field last August, and I'm so excited to see it come to fruition thanks to Ai2! If you have a student who might be interested, please pass the information along! https://boards.greenhouse.io/thealleninstitute/jobs/3820329

❤️ Jon Van Oast, Bistra Dilkina, Ted Schmitt, Justin Kay, Suzanne Stathatos, Arjun Subramonian (they/them), Lucia Gordon, Mitch Fennell, Megan Cromp, Dhruv Sheth, Phuc Le, Talia Speaker, Avi Sundaresan, Armin Bazarjani, Catherine Villeneuve
💯 Carly Batist, Jason Parham
Sara Beery (sbeery@caltech.edu)
2022-01-14 16:51:07

*Thread Reply:* @Bistra Dilkina you were looking for undergraduate opportunities

❤️ Bistra Dilkina
Gracie Ermi (gracieermiifthen@gmail.com)
2022-01-14 16:53:28

*Thread Reply:* Whoa!! What a cool opportunity!

❤️ Sara Beery
Ted Schmitt (teds@allenai.org)
2022-01-14 17:03:43

*Thread Reply:* Kudos as well to @Jes Lefcourt for making this happen!

💯 Sara Beery, Gracie Ermi, Jake Wall
Sara Beery (sbeery@caltech.edu)
2022-01-14 17:05:34

*Thread Reply:* Yes absolutely!!

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-01-15 00:12:21

*Thread Reply:* What are the start dates? And is there an application deadline? Is it paid/are expenses like flight/lodging covered?

Sara Beery (sbeery@caltech.edu)
2022-01-15 08:05:59

*Thread Reply:* @Ted Schmitt can probably best answer re: the logistical details from the Ai2 side

👍 Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-01-16 00:07:50

*Thread Reply:* Ok, asked about it on twitter too & didn’t get a reply. Seems like some important details…

Sara Beery (sbeery@caltech.edu)
2022-01-16 05:47:55

*Thread Reply:* I agree! I just don't know the answers :)

👍 Carly Batist
Sara Beery (sbeery@caltech.edu)
2022-01-16 05:48:33

*Thread Reply:* Though I'm 99.9% sure that it's paid and travel is covered

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-01-16 07:38:47

*Thread Reply:* Yeah I figured it would be given the funders behind it. I’ll wait for Ted’s response. Thanks Sara!

Jes Lefcourt (jeslefcourt@gmail.com)
2022-01-18 08:53:05

*Thread Reply:* Sorry, I was double-confirming the details. The start date is flexible, and all expenses are paid.

❤️ Sara Beery, Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-01-18 12:07:13

*Thread Reply:* Ok great, thanks for the clarification Jes!

Charlotte (chalange@uos.de)
2022-01-19 05:25:33

*Thread Reply:* I know the start date is flexible, but is there still an application deadline?

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-01-16 00:44:08

“TorchGeo is a PyTorch domain library, similar to torchvision, that provides datasets, transforms, samplers, and pre-trained models specific to geospatial data. The goal of this library is to make it simple:

  1. for machine learning experts to use geospatial data in their workflows, and
  2. for remote sensing experts to use their data in machine learning workflows.” https://github.com/microsoft/torchgeo
👍 Sara Beery, Oisin Mac Aodha, Cameron Trotter, Ritwik
Caleb Robinson (calebrob6@gmail.com)
2022-01-17 14:09:08

*Thread Reply:* Hey @Carly Batist, I work on torchgeo 🙂, please don't hesitate to reach out if you have any questions, comments, or something is broken!

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-01-17 14:13:33

*Thread Reply:* Ah, so sorry I didn't tag you in the original post 😬, didn't realize you were already here! Saw it on Twitter and thought it would be super useful.

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-01-17 10:10:10

💥New year, new update to the Conservation Tech Directory! 577 entries now🥳🤩. @Gracie Ermi https://conservationtech.directory/

Reminder: now with academic lab as a resource type! If you've got a #ConservationTech lab, please add it via the Google form (link below). There's also an updated static doc/PDF on figshare.

😍 Sara Beery
🎉 Megan Cromp, Omiros Pantazis, Lucia Gordon, Agnethe Seim Olsen, Talia Speaker, Gracie Ermi, Howard L Frederick
Lily Xu (lily_xu@g.harvard.edu)
2022-01-19 01:00:53

A wonderful internship opportunity at the Allen Institute to work on their Mara Elephant Listening Project

https://boards.greenhouse.io/thealleninstitute/jobs/3820329

👋 Jason Holmberg (Wild Me), Lloyd Hughes, Monty Ammar
Catherine Villeneuve (catherine.villeneuve.9@ulaval.ca)
2022-01-20 19:07:35

Hi all! I'm new to this group. I'm a master's student in Computer Science in Canada. I'm currently working on ML-based Arctic fox movement models, to further our understanding of how predation influences prey distribution in a dynamic Arctic. I also work with snow geese, snowy owls, and lemmings. Is there anyone else here working on the Arctic? Or whose native language is French? In any case, I'm very happy to be part of this community. Looking forward to connecting with you all!

🐻‍❄️ Jason Holmberg (Wild Me), Dhruv Sheth, Mark Roth, Anthony Bao
❤️ Lucia Gordon, Sara Beery, Catherine
👍 Rhea Urquhart
Alexandre Tytgat (alextytgat@gmail.com)
2022-01-22 10:01:50

Hey everyone! I heard about this community from yesterday's talk organized by Climate Change AI (which was great!), and it looks amazing 🙂 A bit about myself: I graduated last September from a data science master's, but I started my studies with a BSc in physics (a subject I'm still passionate about). Now I'm turning my sights to what to do next, and one of my interests is the application of AI to help efforts in domains such as climate change mitigation/adaptation, biodiversity protection, and extreme poverty reduction; hence why I joined this community 😉 Also, I'm from Belgium and speak French (as I saw @Catherine Villeneuve asking if anyone was a native speaker). I look forward to chatting and sharing great resources on AI for conservation with all of you!

👋 Oisin Mac Aodha, Catherine Villeneuve, Mitch Fennell, Thijs, Mark Roth, Sara Beery, Juan Arrechea, Jason Holmberg (Wild Me), Dhruv Sheth, Monty Ammar, Anthony Bao
❤️ Catherine Villeneuve, Jason Holmberg (Wild Me), Catherine
Ben Weinstein (benweinstein2010@gmail.com)
2022-01-25 23:20:50

I am getting conflicting information on how/whether we can download Google satellite images and use them in publications or for training models. Maybe @Sara Beery can get me an answer here. I see two different licenses with opposite interpretations. I saw this (https://www.mdpi.com/2072-4292/14/3/476) today, which uses our lab's data plus some Google Earth imagery they scraped at 11cm.

Sara Beery (sbeery@caltech.edu)
2022-01-25 23:28:26

*Thread Reply:* Google seems to have really complex policies, I can try to ask.

👍 Ben Weinstein
aruna (arunas@mit.edu)
2022-01-26 08:50:38

*Thread Reply:* Aside: another imagery option is to request data from the Pleiades satellites via ESA: https://www.intelligence-airbusds.com/imagery/constellation/pleiades/. The satellites offer high-resolution imagery (50cm/px iirc, with a 26-day revisit cycle). The data is free for researchers accessing areas within a certain size limit.

Ben Weinstein (benweinstein2010@gmail.com)
2022-01-26 12:00:46

*Thread Reply:* thanks @aruna! This is actually on my to-do list; I know it went up last year. We have been purchasing from Maxar. Do you have any experience with these data? For example, Planet claims 50cm data, but it turns out that is a bit of a stretch: it's 65cm data sharpened to 50cm through interpolation. I had a long discussion with their technical team.
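Ben's point about interpolated "resolution" is easy to demonstrate with a toy 1-D numpy sketch: resampling a 65cm-spaced signal onto a 50cm grid only produces weighted averages of the original samples, so it adds pixels but no new information (all grids and values here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)

x_65 = np.arange(0.0, 100.0, 0.65)   # sample positions at 65cm spacing (metres)
sensed = rng.random(x_65.size)       # values the sensor actually measured
x_50 = np.arange(0.0, 100.0, 0.50)   # the nominal "50cm" grid
resampled = np.interp(x_50, x_65, sensed)  # the "sharpened" product

# Every 50cm value is a convex combination of two 65cm neighbours:
# interpolation creates no new extremes and no new detail.
assert resampled.min() >= sensed.min()
assert resampled.max() <= sensed.max()
```

Any ground feature that falls between two 65cm samples is simply never observed, no matter how fine the output grid; pan-sharpening with a higher-resolution band is a different story, but pure interpolation is cosmetic.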

👍 Sara Beery
aruna (arunas@mit.edu)
2022-01-26 21:42:36

*Thread Reply:* Hi Ben! No, unfortunately I don't have any experience with the actual data itself. Definitely sounds like you have spent more time with the team. Have you had a chance to look at the actual data?

Sara Beery (sbeery@caltech.edu)
2022-01-26 09:45:27

https://latamt.ieeer9.org/index.php/transactions/announcement/view/41

CALL FOR PAPERS
IEEE Latin America Transactions Special Issue on AI for Sustainability

We are at the cusp of two massive, intimately intermingled historic trends. On the one hand, our technological revolutions have increased our quality of life, including longer lifespans and greater wealth. On the other hand, we are amid sustainability threats, including climate change and loss of biodiversity. The way our generation responds to the latter challenge using our considerable accumulated understanding will have a significant impact on generations to come. Artificial Intelligence (AI) for sustainability has emerged as a powerful tool with substantial groundbreaking advances. It has opened countless avenues to improve understanding of the underlying problems while supporting effective, sound solutions. However, to take full advantage of the available science and technology, we need to increase their visibility to stimulate widespread adoption. AI refers to the development of machine capabilities such as learning, reasoning, knowledge representation, planning, perception, problem-solving, and pattern recognition. Given the potential for broad impact of these capabilities, it is imperative to support sustainability efforts to extend their adoption and promote awareness of their potential. In particular, we call for articles addressing: a) applications of AI tackling climate change, addressing biodiversity loss, and considering human vulnerability; b) research describing the construction, implementation, and evaluation of capabilities supporting sustainability; and c) ensuring AI itself leaves as small an impact as possible on sustainability.

The purpose of this Special Issue is to provide a forum for researchers and practitioners in the rapidly developing field of AI for sustainability to share novel and original research on the topic. We therefore encourage submission of the latest unpublished work on AI for sustainability. Of particular interest are developments from research groups in Latin America and elsewhere. The areas of interest include both theory and applications of AI on the following topics (but are not limited to them):
• Wild animal and plant identification and monitoring
• Electric systems: usage forecasting, greenhouse gas leak detection, models with small datasets
• Efficient transportation models, smart shipping routing, optimal bike-sharing distribution
• Buildings and cities: cooling and heating control, lighting adaptation, vehicular traffic analysis
• Food waste; cement and ammonia reduction
• Farms and forests: gases, agriculture, pipeline leaks, carbon, deforestation remote sensing
• Carbon dioxide removal: identification and monitoring of placement sites
• Weather prediction: improved weather prediction models, long-term and reliable climate monitoring, ocean reflectivity, and warming monitoring
• Social impacts: adaptation, interaction with the environment
• Carbon offset pricing, sustainability risk assessment, biodiversity, and climate change impacts

Call for papers: December 13, 2021
Submission deadline: March 21, 2022
Notification of acceptance: May 23, 2022
Final manuscript due: July 4, 2022
Publication date: September 2022

👍 Oisin Mac Aodha, Jason Parham, Dhruv Sheth, Anthony Bao
:thumbsup_all: Frederic Fol Leymarie, Dhruv Sheth
Oisin Mac Aodha (macaodha@caltech.edu)
2022-01-26 09:56:56

Apologies for the double post, but I see that not everyone is in the "upcoming_events" channel.

We are pleased to announce the upcoming 9th Workshop on Fine-Grained Visual Categorization (FGVC9), which will be held in conjunction with CVPR 2022 this June.
• Paper submission deadline: 25th March 2022
• Website with more information: https://sites.google.com/view/fgvc9
Historically, we have had a great turnout from people at the intersection of conservation and technology. If you are working in that space and have applications in image understanding, please consider submitting your work.

👍 Sara Beery, Lily Xu, Armin Bazarjani, gvanhorn
🐪 Omiros Pantazis
🐫 Omiros Pantazis
Oisin Mac Aodha (macaodha@caltech.edu)
2022-01-26 09:57:08

*Thread Reply:* More info: ```FGVC9 - The Ninth Workshop on Fine-Grained Visual Categorization 19th June 2022 @ CVPR 2022 Website: https://sites.google.com/view/fgvc9 Twitter: @fgvcworkshop Email: fgvcworkshop@googlegroups.com

This workshop brings together researchers to explore visual recognition across the continuum between basic level categorization (object recognition) and identification of individuals (face recognition, biometrics). Participants are encouraged to submit short papers and to take part in a set of competitions organized in conjunction with the workshop - details below. We will also have an exciting lineup of invited speakers from computer vision through to domain experts.

PAPER SUBMISSION We invite submission of 4 page (excluding references) extended abstracts on topics related to fine-grained recognition. Reviewing of abstract submissions will be double-blind. The purpose of this workshop is not specifically as a venue for publication so much as a place to gather together those in the community working on or interested in FGVC. The workshop proceedings will not appear in the official CVPR 2022 workshop proceedings. Submissions of work which has been previously published, including papers accepted to the main CVPR 2022 conference are allowed.

PAPER SUBMISSION DATES * Deadline for Submission - 25th March 2022 * Notification of Acceptance - 25th April 2022 * Camera Ready - 6th May 2022 * Submission will be via CMT

SCOPE
The purpose of this workshop is to bring together researchers to explore visual recognition across the continuum between basic level categorization and identification of individuals within a category population. Topics of interest include:

Fine-grained categorization
* Novel datasets and data collection strategies for fine-grained categorization
* Low/few shot learning
* Self-supervised learning
* Semi-supervised learning
* Transfer learning
* Attribute and part based approaches
* Taxonomic prediction
* Long-tailed learning

Human-in-the-loop
* Fine-grained categorization with humans in the loop
* Embedding human experts' knowledge into computational models
* Machine teaching
* Interpretable fine-grained models

Multi-modal learning
* Using audio and video data
* Using metadata, e.g. geographical priors
* Learning shape

Fine-grained applications
* Product recognition
* Animal biometrics and camera traps
* Museum collections
* Agricultural
* Medical
* Fashion

COMPETITIONS
We will also be hosting several fine-grained computer vision challenges with tasks ranging from classification of attributes in art images through to classifying diseases in plants. The competitions will be hosted on Kaggle and will be announced in late February 2022.

ORGANIZERS
Sara Beery - Caltech
Serge Belongie - Cornell
Elijah Cole - Caltech
Xiangteng He - Peking University
Christine Kaeser-Chen - DeepMind
Oisin Mac Aodha - University of Edinburgh
Subhransu Maji - University of Massachusetts, Amherst
Abby Stylianou - Saint Louis University
Jong-Chyi Su - University of Massachusetts, Amherst
Grant Van Horn - Cornell
Kimberly Wilber - Google```

Daniel Grzenda (grzenda@uchicago.edu)
2022-01-27 15:31:20

Anyone have a good reference or tool for converting an animal name to their scientific name? I'm working with data collected across 3 different languages (French, Spanish, English) identifying an animal by their common name and would like to standardize it to the scientific name (programmatically if possible)

Oisin Mac Aodha (macaodha@caltech.edu)
2022-01-27 15:35:40

*Thread Reply:* Have you seen the iNaturalist API: https://api.inaturalist.org/v1/docs/#!/Taxa/get_taxa

e.g. https://api.inaturalist.org/v1/taxa?q=european%20robin returns a json file with the species name
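A minimal sketch of how that lookup might be scripted (the response parsing is an assumption based on the `results[].name` field the endpoint returns; `top_scientific_name` and `scientific_name` are hypothetical helper names):

```python
import json
import urllib.parse
import urllib.request

def top_scientific_name(payload):
    """Pick the scientific name of the best match from a /v1/taxa response dict."""
    results = payload.get("results") or []
    return results[0]["name"] if results else None

def scientific_name(common_name):
    """Query the iNaturalist taxa endpoint for a common name (makes a network call)."""
    url = "https://api.inaturalist.org/v1/taxa?" + urllib.parse.urlencode({"q": common_name})
    with urllib.request.urlopen(url) as resp:
        return top_scientific_name(json.load(resp))

# e.g. scientific_name("european robin") should yield "Erithacus rubecula"
```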

🙏 Daniel Grzenda
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-01-27 15:36:02

*Thread Reply:* We use the ITIS API

https://www.itis.gov/ws_description.html
🎉 Jon Van Oast, Sara Beery, Daniel Grzenda
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-01-27 15:36:12

*Thread Reply:* Not sure if it's multilingual

Beckett Sterner (bsterne1@asu.edu)
2022-01-27 19:34:28

*Thread Reply:* Just keep in mind that common and scientific names frequently change meanings over time as the taxonomy is revised. Same name != same concept just like when you have to align multiple ontologies

👆 Sara Beery
Dan Morris (agentmorris@gmail.com)
2022-02-07 13:00:20

*Thread Reply:* We had to solve this problem to a slightly weaker standard than others may have: we wanted consistent mapping to a common taxonomy for lots and lots of records from all over the world, but because we weren't publishing the results, it didn't have to be "right" in every single case, just "right enough" and consistent, and we had to have a way of quickly making manual corrections. E.g. we had to have a way to permanently indicate that "lion" in our Idaho data meant something different from "lion" in our Snapshot Safari data... as you can imagine, there are 1000 special cases like that. We also needed a way to quickly verify the results by looking at images, since I don't actually know the scientific names for anything, and if you're not careful, you will map "wolf" to a fish (yes, there is a fish called "wolf"). So, with a huge dose of "YMMV", we ended up with the code here:

https://github.com/microsoft/CameraTraps/tree/main/taxonomy_mapping

...mostly here:

https://github.com/microsoft/CameraTraps/blob/main/taxonomy_mapping/species_lookup.py

...which basically downloads both the iNat and GBIF taxonomy files, maps lots of common-name queries into both taxonomies (preferring iNat), and makes a big spreadsheet that makes it easy (or at least minimally painful) to double-check all the mappings and manually fill in things that failed to map to anything.

I'm not sure I would advocate anyone actually running that code, but I would definitely advocate for the overall approach, which worked well and got the manual work of mapping naming schemes for zillions of images down to maybe an hour.
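The override idea above ("lion" in Idaho vs. "lion" in Snapshot Safari) can be sketched roughly like this; the table contents and function name are hypothetical, and a real pipeline would load the global table from the iNat/GBIF taxonomy files and the overrides from a manually curated spreadsheet:

```python
# Hypothetical lookup tables for illustration only.
GLOBAL_COMMON_NAMES = {"lion": "Panthera leo", "wolf": "Canis lupus"}
PROJECT_OVERRIDES = {
    "idaho": {"lion": "Puma concolor"},  # "lion" in this dataset means mountain lion
}

def resolve(common_name, project=None):
    """Map a common name to a scientific name, letting per-project corrections win."""
    name = common_name.strip().lower()
    override = PROJECT_OVERRIDES.get(project, {})
    if name in override:
        return override[name]
    # None signals "failed to map": flag it for manual review in the spreadsheet.
    return GLOBAL_COMMON_NAMES.get(name)
```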

Kevin Webb (ktwebb86@gmail.com)
2022-01-28 10:51:27

Hi everyone, it’s great to be here! As a quick introduction, my name is Kevin Webb, and I invest in startups with positive impacts on biodiversity, so I’m excited to learn from you, share roles as they come up, and be a resource if my background can be useful. Some of the areas I’m most excited about here are AI for species and individual recognition, human-animal interaction, and the use of AI to inform conservation and climate-related decisions (like corridor placement). On the side, I also enjoy making things, and you can see past projects/writing at ktfoundry.com.

🐆 Jason Holmberg (Wild Me), Catherine Villeneuve
🎉 Jason Holmberg (Wild Me), Marcus Lapeyrolerie
Catherine Villeneuve (catherine.villeneuve.9@ulaval.ca)
2022-01-28 17:44:05

Could be interesting for some of you: a special issue from Frontiers on ML/DL for ecological monitoring: https://www.frontiersin.org/research-topics/32067/advances-in-machine-learning-and-deep-learning-for-monitoring-terrestrial-ecosystems. The abstract deadline is March 27, and the manuscript deadline is May 26.

😍 Sara Beery, Dhruv Sheth, Anthony Bao, Monty Ammar
Chris Yeh (chrisyeh96@gmail.com)
2022-01-28 18:03:51

Calling all researchers working on areas related to computational sustainability! We invite you to submit your work to the 5th annual CompSust Doctoral Consortium 2022. We are also looking for students interested in giving tutorials.

When and Where: March 11-12, 2022; Virtual
Submission Deadline: extended to February 8, 2022 11:59 PM Pacific Time
Submission requirements: 2-page research abstract

More Info: http://www.compsust.net/compsust-2022/

We invite students and researchers advancing research in computational techniques with applications to sustainability-related topics, which include
• Clean energy
• Wildlife conservation
• Food security
• Public health
• Public transportation
• Climate action
• Disaster adaptation
• ... and many more!

❤️ Lily Xu, Enoch Luk, Sara Beery, Anthony Bao, Ayan Mukhopadhyay, Armin Bazarjani, Chris Yeh
Carl Boettiger (cboettig@berkeley.edu)
2022-01-31 10:48:18

This upcoming (virtual) conference could be of interest to anyone working on or interested in applying ML to the digitization of biodiversity museum collections: https://www.idigbio.org/content/digital-data-2022-enhancing-advancing-quality-digitized-data

😍 Sara Beery, Nico Franz, Kevin Webb, Libby Ellwood, Lily Xu, Akronix
Beckett Sterner (bsterne1@asu.edu)
2022-02-02 14:18:11

Subject: NASA Student Internship - Seeking Applicants

Hello Biological Diversity and Ecological Forecasting Program Communities,

As you may know, we frequently host student interns supporting our programs at NASA Headquarters. This coming summer, we will again fill a post and are currently seeking applications. These are paid 10-week positions where students work remotely/virtually. The position is posted here: https://nasacentral.force.com/s/course-offering/a0Bt0000004lRJQ/. I have pasted the position description below. Please share this opportunity with students you think would be a good fit.

Sincerely,
Keith Gaddis

Position Description
This position will examine strategies for advancing the biological diversity and ecological forecasting programs within NASA's Earth Science Division. Interns will advance the use of remote sensing for detecting, understanding, and forecasting patterns of life on Earth. This position will examine linkages and synergistic relationships between these programs and other activities within and outside of NASA.

Applicants with a background in conservation biology, ecology, evolution, computer programming, statistics, or communications/journalism are sought. Interns will advance outward communication of program activities, build program infrastructure, develop and implement evaluation metrics for science projects, and support science review.
The mentor will develop projects specific to the intern's background and interests on these themes.

😍 Sara Beery, Nico Franz, Catherine Villeneuve, Lily Xu, Jason Holmberg (Wild Me), Dhruv Sheth, Lucia Gordon
Björn Lütjens (bjoern.luetjens@gmail.com)
2022-02-03 18:05:39

Hello Everybody,

Does anybody know good ground-truth/validation datasets on wildlife, wildfire, tree crown segmentation, and/or belowground carbon quantification? We've been working on an awesome list that contains links and short descriptions of the best forest datasets out there. Thanks to contributions from this Slack there are already a lot of datasets, but we're still missing links on those topics. I'd love to hear your finds, thank you so much! :):)

https://github.com/blutjens/awesome-forests

❤️ Lily Xu, Emmanuel Dufourq, David
🌳 Emmanuel Dufourq
Sara Beery (sbeery@caltech.edu)
2022-02-03 18:07:35

*Thread Reply:* There are lots of great wildlife datasets on https://lila.science/

plus the iWildCam competition datasets https://github.com/visipedia/iwildcam_comp

And the iNaturalist datasets: https://github.com/visipedia/inat_comp

🎉 Björn Lütjens
🐝 Björn Lütjens
Ben Weinstein (benweinstein2010@gmail.com)
2022-02-03 18:38:53

*Thread Reply:* @Björn Lütjens does this relate to https://arxiv.org/abs/2112.00570? I was literally just emailing Alex and ccing @Sara Beery. We are getting some data out the door and maybe into torchgeo, which @Caleb Robinson spoke about yesterday.

👍 Sara Beery, Björn Lütjens
David (dwddao@gmail.com)
2022-02-04 21:53:56

*Thread Reply:* Some of the datasets included in the climate change benchmark are also listed in @Björn Lütjens’ repository, but I believe the benchmark plans to be more than that (a full-fledged environment to train and validate self-supervised algorithms). Alex mentioned it would be super cool to integrate your data @Ben Weinstein !! 📡

👍 Björn Lütjens, Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2022-02-04 21:56:33

*Thread Reply:* yup. We are actively talking about it now.

🎉 Björn Lütjens
Sara Beery (sbeery@caltech.edu)
2022-02-05 11:44:02

*Thread Reply:* @Chris Yeh Some of the SustainBench dataset(s) might also be relevant?

Björn Lütjens (bjoern.luetjens@gmail.com)
2022-02-05 13:13:56

*Thread Reply:* Amazing thanks!!! I added pointers to lila.science, iWildCam, iNaturalist, and SustainBench. LILA has two cool datasets on forest health and canopy height. I couldn't find directly relevant datasets on SustainBench, but provided a pointer. Thanks!!

Lmk, in case you hear of any good wildfire or belowground carbon datasets 🙂

❤️ Lily Xu
Sara Beery (sbeery@caltech.edu)
2022-02-05 13:21:03

*Thread Reply:* There is a wildlife/wildfire intersection dataset on Wildlife Insights, from Australia, looking at recovery after bushfires. But I don't think it's ML ready. https://www.worldwildlife.org/stories/an-eye-on-recovery

Björn Lütjens (bjoern.luetjens@gmail.com)
2022-02-05 13:27:43

*Thread Reply:* very cool! I couldn't find a link to the dataset, they might still be processing it. But I'll keep an eye out for this data, thx!

Lily Xu (lily_xu@g.harvard.edu)
2022-02-07 12:14:50

*Thread Reply:* I think @Carly Batist and @Gracie Ermi also have datasets listed in their Conservation Tech directory! https://conservationtech.directory/

👀 David, Björn Lütjens
Chris Yeh (chrisyeh96@gmail.com)
2022-02-07 18:32:17

*Thread Reply:* > @Chris Yeh Some of the SustainBench dataset (s) might also be relevant? Re: @Sara Beery - none of the SustainBench datasets deal with wildlife, wildfire, trees, or carbon. We decided against wildlife because WILDS / iWildcam + lila + iNat already cover wildlife quite well, and we didn't have any in-house (Stanford SustainLab) expertise on the others. (We searched for trees / carbon datasets, but we didn't find any that seemed ready-to-go for inclusion. Maybe in a future version tho!)

👍 Björn Lütjens
🎉 Björn Lütjens
Björn Lütjens (bjoern.luetjens@gmail.com)
2022-02-14 15:49:44

*Thread Reply:* Thank you so much for the thoughts @Lily Xu and @Chris Yeh . I'll add a link to conservationtech.directory

😊 Lily Xu
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-02-15 07:07:57

*Thread Reply:* @Björn Lütjens We’ll add this forest dataset repo to the directory as well in our next update!

Björn Lütjens (bjoern.luetjens@gmail.com)
2022-02-17 19:38:54

*Thread Reply:* Amazing, thank you!

Silvia Zuffi (silvia@mi.imati.cnr.it)
2022-02-07 10:16:43

Hi all, we have recently published a report on some work we have been doing on fish sounds, and I would be interested in some feedback and suggestions! https://arxiv.org/abs/2201.05013v2

😍 Sara Beery, Justin Kay, Kewal Shah, Jason Holmberg (Wild Me), Jason Parham, Dhruv Sheth, Silvia Zuffi, Emilio Luz-Ricca, Björn Lütjens
👍 Ritwik
🐟 Björn Lütjens
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-02-09 03:39:55

Hi all! @Gracie Ermi & I are working on a new update to the Conservation Tech Directory -- we’ll now be including ‘r-package’ and ‘python-package’ tags and adding conservation tech-related packages as resources (for analyzing data from camera traps, passive acoustics, tracking/telemetry, eDNA, etc.). I’m an R person myself so I know a bunch of R packages but have only got a few Python ones I’ve come across. Wanted to access the hive mind and see if anyone knew of others to include in our next update that aren’t already on this list? Thanks in advance!

😍 Lily Xu, Declan, Justin Kay, Sara Beery, Talia Speaker, Jason Holmberg (Wild Me), Catherine Villeneuve, Ando Shah, Hannah Yin
👍 Benjamin Kellenberger, Björn Lütjens
Lily Xu (lily_xu@g.harvard.edu)
2022-02-09 09:02:11

*Thread Reply:* y'all are doing such a valuable thing thank youuuuu

🤩 Carly Batist
💯 Talia Speaker
👍 Benjamin Kellenberger
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2022-02-09 15:21:59

*Thread Reply:* That’s an amazing list on your Web page indeed! I second @Lily Xu’s comment; thanks a lot for your work!

Regarding contents in the screenshot: I just noticed TensorFlow, which raises the question of how generic you allow entries to be. In that case you could open Pandora's box with Python and add PyTorch, SciPy, Scikit-Image, sklearn, etc.

Otherwise I’ll just speak for a colleague and mention a must-have: DeepLabCut: https://github.com/DeepLabCut/DeepLabCut

Website: <http://deeplabcut.org>
❤️ Lily Xu, Carly Batist, Sara Beery
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-02-09 23:44:11

*Thread Reply:* We actually just took TensorFlow (& some others like seewave) out because of being too ‘general’, we were debating where to draw the line and it gets hard! But we agree, thanks for your input!

👍 Benjamin Kellenberger, Sara Beery
Alayna Van Dervort (av@thebigwild.com)
2022-02-09 14:05:22

Hi all, is anyone working on Mountain Lion individual identification? We @thebigwild would be very keen to speak! av@thebigwild.com

👍 Ed Miller, Aarnav Sawant
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2022-02-09 15:23:49

*Thread Reply:* Mountain Lion doesn’t seem to be on the list, but the go-to experts in animal re-ID for me are the folks at WildMe: https://www.wildme.org/#/ @Jason Parham @Jason Holmberg (Wild Me)

Jason Parham (bluemellophone@gmail.com)
2022-02-09 15:26:18

*Thread Reply:* We currently don’t have a Mountain Lion re-ID project, but we are in discussion about starting a North America carnivore platform

:bearid: Ed Miller, Sara Beery
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-02-09 15:27:14

*Thread Reply:* We have never seen the baseline dataset of known IDs that would be the starting point of a future effort

Jason Parham (bluemellophone@gmail.com)
2022-02-09 15:27:43

*Thread Reply:* Agreed, @Alayna Van Dervort do you have a current Mountain Lion photo ID dataset?

Kevin Webb (ktwebb86@gmail.com)
2022-02-09 16:56:21

*Thread Reply:* I love this idea. Are you in touch with Beth Pratt (NWF, P-22 advocate) or Liz Hadly at Stanford?

Jason Parham (bluemellophone@gmail.com)
2022-02-09 16:58:42

*Thread Reply:* @Jason Holmberg (Wild Me)

Alayna Van Dervort (av@thebigwild.com)
2022-02-09 18:06:04

*Thread Reply:* Hi @Jason Holmberg (Wild Me) I will email you directly !

Jason Parham (bluemellophone@gmail.com)
2022-02-09 15:01:43

@Carly Batist Can you add a docker-image tag? We have Python packages at Wild Me, but we also have fully-configured and public Docker images with all of our code, plugins, required dependencies, and CUDA pre-installed.

😎 Jon Van Oast, Sara Beery, Carl Boettiger
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-02-10 02:44:56

*Thread Reply:* Hi Jason! We can absolutely add that term to your description so that if someone searches ‘docker’ WildMe will still show up, but we try to keep the tags to more generalized topic areas so that we don’t end up with dozens of them. Hope this is an ok solution!

Jason Parham (bluemellophone@gmail.com)
2022-02-10 13:40:11

*Thread Reply:* that works!

👍 Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-02-10 02:40:45

https://www.prnewswire.com/news-releases/terrapulse-launches-worlds-first-10-meter-resolution-global-forest-monitoring-platform-301477047.html

🌳 Sara Beery, Monty Ammar, Kevin Webb
Akronix (akronix5@gmail.com)
2022-02-11 02:47:55

https://kili-technology.com/blog/kili-s-community-challenge-plastic-in-river-dataset

❤️ Kewal Shah
🙌 Emmanuel Dufourq, Björn Lütjens
Devis Tuia (devis.tuia@epfl.ch)
2022-02-16 09:02:09

Hey folks, I have a couple of PhD positions open in my lab at EPFL, if interested (or if you know interesting candidates) send me a message!

👍 Oisin Mac Aodha, Nico Lang, Omiros Pantazis, Yihang She, Sara Beery, Ben Koger, Monty Ammar, Björn Lütjens, Alexandre Tytgat
🏔️ Omiros Pantazis, Lily Xu, Sara Beery, Elijah Cole (Deactivated), Björn Lütjens
Devis Tuia (devis.tuia@epfl.ch)
2022-02-16 09:03:12

*Thread Reply:* https://www.epfl.ch/about/working/phd-position-mapping-wildlife-environment-interactions-in-the-swiss-alps-with-ai-wildai/ this is one

Devis Tuia (devis.tuia@epfl.ch)
2022-02-17 05:06:47

*Thread Reply:* https://www.epfl.ch/about/working/phd-position-target-shift-in-species-distribution-modeling/ and this is the other!

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2022-02-18 11:18:55

Hi all, thanks very much to @Björn Lütjens for inviting me to this group. I've just joined ETH Zürich/Restor as a postdoc where I'm working on applying computer vision and ML to monitor forest restoration sites. I just got back from Antarctica as a Winterover for the IceCube Neutrino Observatory, but before that I was in Liverpool at LJMU working on real-time animal detection in thermal imagery. Otherwise I've also been working on and off with Frontier Development Lab putting ML into space for disaster response. My background is a hodgepodge of computer vision, remote sensing, space science, edge hardware and ecology so I'm always up for collaboration and conversation that mixes those domains. Nice to meet you all!

👋 Oisin Mac Aodha, Benjamin Kellenberger, Ben Weinstein, Stephanie O'Donnell, Kangyu Zheng, Akronix, Yihang She, Elijah Cole (Deactivated), Ankita Shukla, Omiros Pantazis, Ando Shah, Catherine Villeneuve, Dhruv Sheth, Monty Ammar, Emilio Luz-Ricca, David, Dan Morris, Lily Xu, Sara Beery, Sicily Fiennes, Alexandre Tytgat
🐝 Björn Lütjens, Dhruv Sheth, David, Lee Wall
Lee Wall (lmw17@my.fsu.edu)
2022-02-22 19:10:52

Hey everyone! I just graduated with my Bachelor's in applied math in December, and I've been growing more and more interested in both AI and the environmental field. Happened upon this group while scouring the Internet for projects, groups, etc. making organized efforts to connect people across fields to solve the challenges facing our planet-- and it seems that's exactly what this is! Really glad to have found y'all. Feel free to reach out if you wanna chat or anything!

👍 Omiros Pantazis, Mitch Fennell, Catherine Villeneuve, Lily Xu, Akronix, Benno Simmons, Lucia Gordon, Devis Tuia, Alexandre Tytgat, Lee Wall
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2022-02-23 02:40:44

*Thread Reply:* Welcome! 😄

Sicily Fiennes (sicilyfiennes@gmail.com)
2022-02-23 15:33:01

Hi everyone! I am a PhD student and AI for Earth grantee working on using machine learning to detect species of birds in the wildlife trade. Any other PhD students using machine learning or running code on Microsoft Azure here? We’ve started a separate Slack channel, i.e. for researchers at a similar coding level and career stage. Please PM me or comment if you’d like to join!

🌳 Lily Xu, Jason Holmberg (Wild Me), Catherine Villeneuve
👋 Jon Van Oast, Benjamin Kellenberger, Devis Tuia, Alexandre Tytgat, Zac Winzurk
Lily Xu (lily_xu@g.harvard.edu)
2022-02-23 18:21:53

*Thread Reply:* I've got a project on Azure right now and would love to be added to this Slack channel! Thanks Sicily 🙂

👋 Sicily Fiennes
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2022-02-24 02:24:48

*Thread Reply:* I've had my fair share of Azure too, also through my software, and would like to contribute to and learn from this channel! Aside from that, we're also working on (aerial) bird detection.

👍 Sicily Fiennes, Sara Beery
Sicily Fiennes (sicilyfiennes@gmail.com)
2022-02-24 07:17:16

*Thread Reply:* Hi both! Could you please share your emails via PM

Caleb Robinson (calebrob6@gmail.com)
2022-02-25 13:51:39

*Thread Reply:* Hi Sicily 👋 could you add me as well? I'm not a PhD student but do use Azure!

👍 Benjamin Kellenberger, Sara Beery, Lily Xu
Alayna Van Dervort (av@thebigwild.com)
2022-05-31 18:11:53

*Thread Reply:* Hello, Yes please I would love to be added, av@thebigwild.com

Lily Xu (lily_xu@g.harvard.edu)
2022-02-28 18:28:43

Hi everyone! We are excited to host Prof. @Carl Boettiger at the Harvard CRCS Social Impact Seminar series next week.

Carl Boettiger (UC Berkeley) will be talking about the intersection of AI and conservation for ecological monitoring, political ecology, and so on. See his full abstract/bio.

Monday, March 7, 11am ET Register for the talk

Carl is also happy to chat with folks afterwards! I've set up this sign up for a chat slot with Carl; please feel free to sign up, and invite others in your group/department. This can be 1-on-1, several members in your group, etc.

😍 Sara Beery, Akronix, Lee Wall, Ando Shah, Yuerou Tang
🙌 Marcus Lapeyrolerie, Akronix, Millie Chapman, Jessica Couture
👍 Benjamin Kellenberger, Casey Youngflesh
Oisin Mac Aodha (macaodha@caltech.edu)
2022-03-01 10:19:01

Hi everyone!

I have an open postdoc position on Machine Learning for Large Scale Biodiversity Mapping at the University of Edinburgh that might be of interest to some people here. I'll add more details in this thread. Don't hesitate to reach out if you have any questions.

I've also got a tweet here in case you would be willing to spread the word: https://twitter.com/oisinmacaodha/status/1498678226310287371

🙌 Sara Beery, Elijah Cole (Deactivated), Lily Xu, Ben Weinstein, Pietro Perona, Ando Shah, Omiros Pantazis, Shane Lubold, Subhransu Maji
Oisin Mac Aodha (macaodha@caltech.edu)
2022-03-01 10:19:27

*Thread Reply:* Summary: Postdoc in Machine Learning for Large Scale Biodiversity Mapping at the University of Edinburgh Duration: One year, starting from April 2022 (can be negotiated) More information and how to apply: https://edin.ac/3hutyUG Additional queries: Contact Oisin Mac Aodha - https://homepages.inf.ed.ac.uk/omacaod/

Details: The School of Informatics, University of Edinburgh invites applications for a research associate in the area of machine learning for large scale biodiversity mapping. The project is part of an exciting collaboration between the School of Informatics and the citizen science platform iNaturalist, and features input from the International Union for Conservation of Nature (IUCN).

Species range maps, describing where species do and do not occur, are critical to prioritizing scarce conservation resources in order to mitigate some of the worst potential impacts of climate change on biodiversity. Citizen science platforms such as iNaturalist generate millions of casual observations each year which contain information about where thousands of different species have been observed. However, constructing accurate species range maps is a time consuming and laborious process that is further hampered by data quality and coverage issues.

This project aims to develop novel machine learning-powered methods for producing plausible species range maps from citizen science data. The developed methods will enable the joint modelling of thousands of different species' distributions. These methods will make use of, and advance, the state of the art in deep learning-based density estimation in the context of noisy geospatial data.

For additional information please check our previous related project: https://edin.ac/3vvFyh2.

Dan Morris (agentmorris@gmail.com)
2022-03-03 18:57:16

Around a zillion months ago, on a thread on this channel, @Carl Boettiger suggested that we add some structured metadata to LILA to improve standardization and discoverability, and I said (paraphrasing) "great idea, I'm on it!", and then I did nothing about it for six months, but I finally got a chance to add schema.org markup to LILA pages, such that they are now searchable through, e.g., Google Dataset Search:

https://datasetsearch.research.google.com/search?src=0&query=lila.science&docid=L2cvMTFwenlwODAwYg%3D%3D

If anyone sees anything amiss with the metadata, let me know!
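For anyone curious what that markup looks like, the schema.org Dataset metadata goes into a JSON-LD `<script>` tag in the page head; a rough sketch (the field values below are hypothetical examples, not LILA's actual metadata):

```python
import json

# Hypothetical metadata for illustration; the real LILA pages define their own fields.
dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example Camera Trap Dataset",
    "description": "Labeled camera trap images for wildlife classification.",
    "url": "https://lila.science/",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
}

def jsonld_script(metadata):
    """Wrap Dataset metadata in the script tag that dataset-search crawlers look for."""
    return '<script type="application/ld+json">%s</script>' % json.dumps(metadata, indent=2)
```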

🙌 Caleb Robinson, Talia Speaker, Lily Xu, Mitch Fennell, Carl Boettiger, Jason Holmberg (Wild Me), Sara Beery
🎉 Jon Van Oast, Subhransu Maji, Carl Boettiger, Riccardo de Lutio, Jason Holmberg (Wild Me), Sara Beery
👍 Jon Van Oast, Eelke, Benjamin Kellenberger, Oisin Mac Aodha, Jessica Couture
Elijah Cole (Deactivated) (ecole@caltech.edu)
2022-03-03 22:59:28

https://www.nytimes.com/interactive/2022/03/03/climate/biodiversity-map.html

The New York Times
By Catrin Einhorn and Nadja Popovich
👍 Benjamin Kellenberger, Riccardo de Lutio, Oisin Mac Aodha, Omiros Pantazis, Catherine Villeneuve, Kewal Shah, Casey Youngflesh, Suzanne Stathatos, Jessica Couture, Ted Schmitt, Sara Beery, Mark Roth
Monty Ammar (montyx23@gmail.com)
2022-03-06 05:44:40

Hi all, long shot, but I was wondering if anyone has come across any databases containing labelled gold mines / artisanal gold mines from Landsat 7/8/9 or Sentinel-2 satellite images?

Thanks 😊

Sara Beery (sbeery@caltech.edu)
2022-03-06 05:59:15

*Thread Reply:* @Laure Delisle what type of mines were you looking at?

👍 Monty Ammar
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-03-06 06:52:24

*Thread Reply:* You might ask the folks from the ConX artisanal mining challenge? I think some of the solutions were working on remote sensing of mines. https://www.artisanalminingchallenge.com/previous-round

👍 Sara Beery, Monty Ammar
Monty Ammar (montyx23@gmail.com)
2022-03-06 16:54:29

*Thread Reply:* @Carly Batist Thank you, I'm already in contact with them 🙂. While waiting to hear more information, I thought I'd come here and see if there are any platforms containing them that I'm missing out on.

👍 Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-03-07 03:25:20

https://techlaw.uottawa.ca/aisociety/alextrebek-postdoc-environment-2022

👍 Sara Beery, Justin Kay, Lily Xu, Zac Winzurk
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-03-07 03:27:12

Another postdoc opportunity -

Lucia Gordon (luciagordon@college.harvard.edu)
2022-03-08 01:02:07

Hi everyone! I just created a new #petitions channel for us to share environment/conservation-related petitions we come across with the community. Please join if you want to use your virtual signature to influence policy and protect nature!

❤️ Kewal Shah, Lily Xu
Devis Tuia (devis.tuia@epfl.ch)
2022-03-08 10:15:36

Gentle reminder, one week left to apply for PhD positions in AI for conservation in beautiful Switzerland! https://www.epfl.ch/labs/eceo/eceo/open-positions/

🙌 Stephanie O'Donnell, Lily Xu, Sara Beery, Dhruv Sheth
Dan Situnayake (daniel@edgeimpulse.com)
2022-03-09 13:00:51

Hi everyone! I work at Edge Impulse; we create tools to help developers build machine learning models for deployment to embedded devices. We're not a conservation tech company, but we do a ton of conservation-related work:

• We fund and mentor the WILDLABS Fellowship: On the edge
• We organize community conservation tech efforts, like ElephantEdge
• We work very closely with some amazing conservation tech organizations, like Arribada and CXL
• We are regular speakers at community events like WILDLABS tech tutors
• We donate 1% of our revenue to environmental nonprofits

I'm posting here because we're looking for a machine learning engineer to join our team. You'd spend your time creating new technologies around machine learning on edge devices—including those that have the potential to directly benefit conservation research. For example, one big project we're working on right now is training embedding models for camera traps. If this type of thing sounds appealing, drop me an email at dan@edgeimpulse.com. Thank you!!

jobs.lever.co — Location: Amsterdam (Remote), Team: ML
❤️ Stephanie O'Donnell, Talia Speaker, Dhruv Sheth, Benjamin Akera, Yihang She
👍 gvanhorn, Oisin Mac Aodha, Justin Kay, Sam Kelly, Daniel Davila, Benjamin Kellenberger, Dhruv Sheth, Carly Batist, Cameron Trotter
😍 Sara Beery, Dhruv Sheth
Angjoo Kanazawa (kanazawa@berkeley.edu)
2022-03-09 23:29:36

Hi all! We’re organizing the 2nd CV4Animals workshop at CVPR 2022! We’re inviting abstracts to present at the workshop (either by poster or oral), deadline April 29th, please see the website and attached PDF for more details! https://www.cv4animals.com/home

👍 gvanhorn, Jason Parham, Kakani Katija, Benjamin Kellenberger, Devis Tuia, Oisin Mac Aodha, Daniel Davila, Dhruv Sheth, Jason Holmberg (Wild Me)
:zebra_face: Jason Parham, Elijah Cole (Deactivated), Dhruv Sheth, Jason Holmberg (Wild Me)
🙌 Stephanie O'Donnell
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-03-10 03:27:44

💥Conservation Tech Directory update!💥

New feature - we’ve included conservation tech stats packages (python & R/rstudio tags), and we're now at 618 resources! 🤩 We’ve also created a new resource type (Tool) and re-classified resources under it; this includes datasets, toolkits, software packages, etc. https://conservationtech.directory/ PDF version: https://doi.org/10.6084/m9.figshare.15442200. @Gracie Ermi

conservationtech.directory
😍 Dhruv Sheth, Lily Xu, Monty Ammar, Jason Holmberg (Wild Me), Sara Beery
🙌 Dhruv Sheth, Declan, Jason Holmberg (Wild Me), Emilio Luz-Ricca
👍 Benjamin Kellenberger, Jason Holmberg (Wild Me)
🎉 Jon Van Oast, Jason Holmberg (Wild Me), Yuerou Tang, Hannah Yin
💪 Monty Ammar
Chris Yeh (chrisyeh96@gmail.com)
2022-03-10 20:24:34

If you are interested in applications of machine learning (ML) for sustainability, we welcome you to attend the talks and presentations for the Computational Sustainability Doctoral Consortium Conference (CompSust DC 2022), which will be held virtually tomorrow (Friday) and Saturday.

The conference program and schedule can be found here: https://www.compsust.net/compsust-2022/program.php

Zoom link: https://vanderbilt.zoom.us/j/4153638703?pwd=QjZ2UFlwdDRZanB2ODlZUDhEVFBwdz09 Meeting ID: Passcode: 2248

❤️ Ayan Mukhopadhyay, Sara Beery, Justin Kay, Dhruv Sheth, Subhransu Maji, Monty Ammar, Yihang She, Oisin Mac Aodha, Stephanie O'Donnell, Akronix, Caleb Robinson, Lily Xu
Akronix (akronix5@gmail.com)
2022-03-11 10:05:07

*Thread Reply:* How can we attend the talks?

Ayan Mukhopadhyay (ayanmukg@gmail.com)
2022-03-11 10:39:17

*Thread Reply:* @Akronix Just use the zoom link to log in

Peter van Lunteren (contact@pvanlunteren.com)
2022-03-12 09:02:13

*Thread Reply:* Unfortunately I can’t make it on time. Will it be recorded?

Chris Yeh (chrisyeh96@gmail.com)
2022-03-12 10:57:57

*Thread Reply:* Yes, the talks are recorded. We will upload recordings to YouTube

Peter van Lunteren (contact@pvanlunteren.com)
2022-03-13 06:08:35

*Thread Reply:* Could you send me the link once available on YouTube? Thanks!

Chris Yeh (chrisyeh96@gmail.com)
2022-03-12 11:56:43

CompSust networking session on Gather.town for the next 2 hours! (12-1:15p ET + 45-min "lunch break") https://app.gather.town/events/g92MikIFkdovAgODSdwI

Chris Yeh (chrisyeh96@gmail.com)
2022-03-12 15:44:09

Talk by Victor Anton (founder & CEO of Wildlife.ai) in 15 minutes! https://vanderbilt.zoom.us/j/4153638703?pwd=QjZ2UFlwdDRZanB2ODlZUDhEVFBwdz09

Riccardo de Lutio (riccardo.delutio@geod.baug.ethz.ch)
2022-03-15 10:23:25

Hi everyone! Would anyone have recommendations for pretrained models for tree detection, or tree crown segmentation for urban trees? I’m aware of the DeepForest prebuilt model by @Ben Weinstein which looks amazing (but not specifically designed for urban trees) or the TreeTect models. We could of course also retrain a model but since we only have the coordinates of the trees and we don’t necessarily want to relabel bounding boxes or masks it’s maybe not the best way to go about it. Any other ideas? Thanks!

Devis Tuia (devis.tuia@epfl.ch)
2022-03-15 10:34:35

*Thread Reply:* For urban tree detection + species classification, there was the registree project led by Caltech and ETH (http://www.vision.caltech.edu/registree/)

vision.caltech.edu
Devis Tuia (devis.tuia@epfl.ch)
2022-03-15 10:35:00

*Thread Reply:* (but you should know it, actually 😉, now that i think about it )

Devis Tuia (devis.tuia@epfl.ch)
2022-03-15 10:40:16

*Thread Reply:* For tree crown segmentation, look at works by Antonio Ferraz or Martin Weinmann. They have been doing a lot in that direction

Riccardo de Lutio (riccardo.delutio@geod.baug.ethz.ch)
2022-03-15 11:12:38

*Thread Reply:* Thanks yes that’s definitely one option too! 🙂

Ben Weinstein (benweinstein2010@gmail.com)
2022-03-15 11:50:21

*Thread Reply:* You should think of DeepForest as a baseline model that you build from; I've heard of a number of people adding urban tree detections. Just spend 2 or 3 days annotating and fine-tuning and you should be in good shape.
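A minimal fine-tune along the lines Ben suggests might look like the sketch below, assuming the `deepforest` Python package (v1.x API). The annotation file and directory names are hypothetical placeholders, and this is an illustrative sketch rather than any project's actual pipeline:

```python
# Sketch: fine-tune the prebuilt DeepForest crown detector on a small set of
# hand-labeled urban tree boxes (assumes the deepforest package is installed).
from deepforest import main

model = main.deepforest()
model.use_release()  # load the pretrained tree-crown release weights

# Hypothetical annotations: a CSV with image_path, xmin, ymin, xmax, ymax, label
model.config["train"]["csv_file"] = "urban_annotations.csv"
model.config["train"]["root_dir"] = "urban_images/"
model.config["train"]["epochs"] = 10

model.create_trainer()
model.trainer.fit(model)

# Predict crowns on a new city scene (illustrative file name)
boxes = model.predict_image(path="street_scene.png")
```

A few days of annotation, as Ben suggests, would populate the CSV above; the pretrained weights do most of the work.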

Ben Weinstein (benweinstein2010@gmail.com)
2022-03-15 11:51:15

*Thread Reply:* I'm happy to host an urban tree model branch if it gets up and running.

Riccardo de Lutio (riccardo.delutio@geod.baug.ethz.ch)
2022-03-15 12:01:59

*Thread Reply:* Great thanks!

Dhruv Sheth (dhruvsheth.linkit@gmail.com)
2022-03-16 05:11:10

*Thread Reply:* https://tejaswid.github.io/publication/2019-08-12-Automatic-Segmentation-Trees

Sundara Tejaswi Digumarti
👍 Riccardo de Lutio
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-03-15 10:30:18

https://coomeslab.org/dr-thomas-swinfield/ - Tom Swinfield has been working on this for the last few years, but it's in tropical forests rather than urban landscapes. He or his research group might have some pointers

Forest Ecology and Conservation Group
coomeslab (https://coomeslab.org/author/coomeslab/)
🙏 Riccardo de Lutio
Riccardo de Lutio (riccardo.delutio@geod.baug.ethz.ch)
2022-03-15 10:31:28

*Thread Reply:* Thanks, I’ll look into that!

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-03-15 10:32:23

*Thread Reply:* I also wonder if some of the urban tree mapping citizen science platforms have worked on something

Alexander Pfyffer (alexander.pfyffer@gmail.com)
2022-03-17 14:23:32

Hi everyone! 👋 I’m a social entrepreneur / software engineer looking for people interested in working in regenerative farming in developing countries. The idea is to connect the millions of smallholder farmers to the international CO2 emission compensation market. The goal would be to use smartphone pictures / satellite imagery as a method to assess how much CO2 they are capturing (of course using AI 🙌). I’m planning to go to Ghana for the next few months to test my assumptions. Do you know people / research projects that are working on something similar?

💯 Chittesh T, Margaux Masson-Forsythe
Lauren Gillespie (gillespl@stanford.edu)
2022-03-18 18:15:58

*Thread Reply:* Hi Alexander, you should check out David Dao! He’s also doing a lot of work in carbon credit estimation and secure payment for smallholder farms on the carbon markets in the Global South, specifically Central and South America

Margaux Masson-Forsythe (margaux.masson21@gmail.com)
2022-03-21 16:18:38

*Thread Reply:* https://earthshot.eco/

earthshot.eco
Ando Shah (ando@berkeley.edu)
2022-03-25 19:39:24

*Thread Reply:* Also check out Regen Network https://www.regen.network/

regen.network
Ștefan Istrate (stefan.istrate@gmail.com)
2022-03-22 05:21:46

Happy to announce iWildCam 2022, our 5th annual camera trap challenge, focused on helping ecologists monitor biodiversity. Join the Kaggle competition and help us count individual animals across sequences of images.

https://www.kaggle.com/c/iwildcam2022-fgvc9

[Competition co-organized with @Sara Beery & @John Beuving, with valuable support from @Dan Morris.]

kaggle.com
🙌 Stephanie O'Donnell, Akronix, Justin Kay, Dan Morris
Vincent Miele CNRS (vincent.miele@univ-lyon1.fr)
2022-03-22 09:12:32

*Thread Reply:* Hi Stefan, the competitions are very exciting. However, as a non-participant, I always have this question in mind about the iWildCam competition: besides knowing which team wins the competition, how can we actually know how that team proceeded (algorithms, software, and so on)? In the open science era, it would be great to know... In any case, thanks so much for organizing this kind of event.

Sara Beery (sbeery@caltech.edu)
2022-03-22 11:50:40

*Thread Reply:* Vincent, in the discussion board for each competition the winning teams post their solutions! You can go to any of the past competitions and read about the winning solutions. We also have recorded videos from the last two workshops of a discussion of the competition-winning results for all the FGVC competitions

Sara Beery (sbeery@caltech.edu)
2022-03-22 11:53:54

*Thread Reply:* We're hoping to write up a larger "what we've learned" paper covering the past competitions and the themes of what does and doesn't work. Kaggle can't enforce teams publishing their code unfortunately (unlike LifeCLEF!) but some competition winners have published their code independently (like last year's winner here: https://github.com/alcunha/iwildcam2021ufam)

Stars
6
Language
Python
👍 Ștefan Istrate
Ștefan Istrate (stefan.istrate@gmail.com)
2022-03-22 11:55:58

*Thread Reply:* Competitions like this don't end with a deadline and a winner. Oftentimes participants (are encouraged to) share their solutions afterwards, in forums, papers or workshops. I recommend that everyone interested in learning more revisit such competitions after a while; I am sure new materials will become available.

For example, for iWildCam 2021 there was a FGVC workshop afterwards: https://www.youtube.com/watch?v=yKP_tX_gTk0, and participants shared full solutions (as Sara mentioned), or just descriptions of their solutions on the Kaggle forums:

1st place: https://www.kaggle.com/c/iwildcam2021-fgvc8/discussion/245460
2nd place: https://www.kaggle.com/c/iwildcam2021-fgvc8/discussion/245559
3rd place: https://www.kaggle.com/c/iwildcam2021-fgvc8/discussion/244950

YouTube
FGVC Workshop (https://www.youtube.com/channel/UCp-X0QRcgfwBlkCmrF0Noqg)
Vincent Miele CNRS (vincent.miele@univ-lyon1.fr)
2022-03-23 05:26:32

*Thread Reply:* Thanks for these precise answers. I missed this information, sorry about that. And, again, thanks for organizing these competitions 👍

Beckett Sterner (bsterne1@asu.edu)
2022-03-22 10:49:40

On March 15, 2022, the U.S. National Science Foundation (NSF) held a Town Hall bringing together a multidisciplinary group of researchers to discuss this topic and to identify challenges ripe for this approach. Knowinnovation is a company specializing in accelerating scientific innovation - we are facilitating this program on behalf of the NSF.

The ideas from that Town Hall have been distilled into four topics which will serve as focal points for four separate workshops. Each workshop will consider how all the STEM disciplines (including biology, chemistry, computer sciences, engineering, geosciences, mathematics, physics, social, behavioral, and economic sciences) could be used to tackle a specific problem. All workshops will incorporate cross-cutting themes of diversity, equity, and inclusion and STEM education, training, and workforce development.

Participation in the workshop is by application only. Applications close on Tuesday March 29, 2022.

Apply using this link: https://app.smartsheet.com/b/form/f45ea9e3f86845818d05d272d1f9e604

Additional ‘incubator’ events will provide further engagement for postdocs attending the workshops. Whether or not you are able to participate, we strongly encourage and request you share this information with postdocs in your networks.

Workshop Topics

Workshop 1: Stewarding an Integrated Biodiversity-Climate System (April 14, 2022 11:00 AM - 5:00 PM EDT) We are learning the Rules of Life that govern the essential role of biodiversity in controlling function, maintenance, and adaptation of every ecosystem on Earth. We are also learning that biodiversity and climate are inextricably linked and that everything affecting one affects the other. How might these lessons help us to predict, preserve, and harness the benefits of biodiversity for human society and the natural world?

Workshop 2: Achieving a Sustainable Future (April 19, 2022 11:00 AM - 5:00 PM EDT) We are learning the Rules of Life that govern the complexity of interconnected living systems at multiple scales, e.g., from natural and synthetic cells to organisms, populations, communities, ecosystems, and the biosphere. As we learn more about the ways that living systems use and re-use natural resources, how might these lessons help us devise strategies to improve sustainability?

Workshop 3: Harnessing Microbiomes for Societal Benefit (April 21, 2022 11:00 AM - 5:00 PM EDT) We are learning the Rules of Life that govern the individual and collective metabolism, physiology, signaling, and interaction of different microbiomes, as well as their composition and responses to evolving environments. As we learn more about the roles of microbiomes in all living systems, how might these lessons help us to improve human society and the biosphere?

Workshop 4: Leveraging AI and Data for Predicting Mechanisms (April 26, 2022 11:00 AM - 5:00 PM EDT) We are learning the Rules of Life that govern the prediction of an organism’s observable characteristics from interactions of its genome with the environment. At the same time, novel research on artificial intelligence and data analytics is providing essential tools for integrating Rules of Life data. How might these lessons help us to improve our ability to use AI and Data Science?

Postdoc 'incubators'
All postdoc participants will also be invited to attend a series of incubators to complement the workshops:
• Kick-off Incubator (all selected postdocs): April 12, 1:00 PM to 4:00 PM EDT
• Writing Incubator A: April 22, 11:00 AM to 4:00 PM EDT (for attendees of workshops 1 and 2)
• Writing Incubator B: May 2, 11:00 AM to 4:00 PM EDT (for attendees of workshops 3 and 4)
• Wrap-up Incubator (all selected postdocs): May 17, 11:00 AM to 4:00 PM EDT

😍 Sara Beery
Kevin Webb (ktwebb86@gmail.com)
2022-03-24 14:18:28

If anyone is doing any work around fisheries in the US or Canada, take a look at this forgivable loan program from Multiplier. Deadline April 15. https://multiplier.org/2022/03/announcing-the-technology-adoption-fund-for-sustainable-fisheries-and-the-inaugural-round-of-loan-funding/

Multiplier
🐟 Justin Kay
Fernando Pérez (fernando.perez@berkeley.edu)
2022-03-25 16:20:04

👋 everyone - I’m Fernando Pérez from UC Berkeley, and wanted to introduce myself as well as let you know of a new program we are just launching that I hope will be a great point of connection with this community: the Eric and Wendy Schmidt Center for Data Science and Environment at Berkeley, or ‘DS4E’ for short (our website is here).

The short version is that:
• Our mission is impact-oriented, not academic research. We want to combine the expertise of environmental scientists, engagement with stakeholders close to the issue, and expertise in software engineering and data science, towards concrete, implementable, deployed tools that can contribute to real-world problems.
• Our specific topics to focus our effort on are not yet pre-defined; instead, later this year (once we have some staff hired!) we will organize events to consult with the community on how to best allocate our resources and work together for maximal impact.
The above is only a launch press release, as we are currently in early stages of remodelling space, getting job ads posted, etc. But we wanted to connect with you all in the hope that you will be interested in opportunities to collaborate with us, apply for positions, participate in future events, etc.

I wanted to particularly thank @Sara Beery for kindly welcoming us to this space - we hope to meet many more of you over time!

From our launch team, both @Carl Boettiger and I are on this Slack and will be happy to answer any questions you might have. Thanks for reading!

Berkeley News
🔥 Daniel Davila, Sara Beery, Casey Youngflesh, Declan, Jason Holmberg (Wild Me), Mitch Fennell, Dhruv Sheth, Akronix, Yihang She, Lily Xu, Jinsu Elhance
👋 Daniel Davila, Kewal Shah, Sara Beery, Stephanie O'Donnell, Oisin Mac Aodha, Jason Holmberg (Wild Me), Emilio Luz-Ricca, Dhruv Sheth, Carly Batist, Elijah Cole (Deactivated), Devis Tuia, Lily Xu, Alex Brace, Jinsu Elhance, Angjoo Kanazawa, Ben Best
😍 Dhruv Sheth, Carly Batist, Lee Wall, Lily Xu, Jinsu Elhance
:thumbsup_all: Frederic Fol Leymarie, Ed Miller, Jinsu Elhance
🦜 Ben Best
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-03-26 06:48:31

*Thread Reply:* So cool!! I’ve added it in to our next update of the Conservation Tech Directory!

Akronix (akronix5@gmail.com)
2022-03-26 16:05:18

*Thread Reply:* Very exciting and interesting project! Looking forward to seeing the vacancies for new members of the team!

Lily Xu (lily_xu@g.harvard.edu)
2022-03-27 15:09:09

*Thread Reply:* Wonderful news and congrats on the launch! An inspiring mission for all of us passionate about impact-oriented work

Danielle Montocchio (montocds@mcmaster.ca)
2022-03-28 18:06:30

Hi everyone! I am a PhD candidate at McMaster University, Ontario, Canada. As part of my thesis, I study fish communities in Great Lakes coastal wetlands in Georgian Bay, Ontario. I have been attempting to refine and apply a remote underwater camera system to survey fish abundance and species richness. I now have hours of footage to go through manually, and was hoping to find an automated method that would be less time-consuming. Ideally, the software could flag when there is a fish occurrence that could be later processed by researchers.

The major issue I've encountered however, is the requirement of having significant coding language experience to use anything that has been done in the literature, rather than a user-friendly software with a front-end interface. I am not sure if anyone has a way of processing 15-minute action camera video clips in .mov file format (1050p 60 fps), but I thought I would reach out, just in case. Currently, I have 4 and half hours of footage processed manually, where 80 to 95% of the footage can be empty (just water and/or plants). Usually if a 15-minute clip is completely empty, I delete it, otherwise the video is unedited. I have annotations of fish occurrences, species identification and time-stamps for the processed videos saved in an Excel spreadsheet currently. In total, I have 540 hours of footage, with similar environments, but always slightly different backgrounds of vegetation. I have attached some sample images of occurrences if that helps at all. Again, I am very coding inexperienced so any direction/advice would be greatly appreciated!

👋 Sara Beery, Arjun Subramonian (they/them), Daniel Davila, Subhransu Maji
🎉 Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2022-03-28 18:07:40

*Thread Reply:* @Kakani Katija sounds similar to some of the FathomNet data?

Daniel Davila (daniel.davila@kitware.com)
2022-03-28 18:27:55

*Thread Reply:* We maintain an open source DIY-AI tool for NOAA, called VIAME. Originally targeting fisheries, but we've since expanded to a number of subsea applications. It supports a number of workflows such as detection, tracking, counting, etc., while also providing a means of labeling the data. The idea is for a scientist or decision maker to be able to train models without knowing anything about ML, or even code. Just bring the SME perspective and some time to label haha. It's been deployed at many of NOAA's sites already and some of our collaborators at various universities and commercial companies have found it useful. It's completely free and open source; we don't make products or hold any IP at Kitware. Please feel free to reach out if you'd like me to connect you to our program manager or engineering team.

😍 Sara Beery
🙌 Tarun Kumar Verma, Ando Shah
🎉 Jon Van Oast
Daniel Davila (daniel.davila@kitware.com)
2022-03-28 18:30:18

*Thread Reply:* https://github.com/Kitware/dive

Website
https://kitware.github.io/dive
Stars
42
Kakani Katija (kakani@mbari.org)
2022-03-28 19:12:25

*Thread Reply:* Yes, there's also the Tator annotation tool by CVision AI that has AI tooling as well: tator.io. Web-based and easy to get started.

Ben Weinstein (benweinstein2010@gmail.com)
2022-03-28 22:32:57

*Thread Reply:* I made a tiny GUI a few years ago, 'motion detection only', for finding periods of activity. Nothing like what @Daniel Davila is doing in terms of sophistication. http://benweinstein.weebly.com/deepmeerkat.html

Dr. Ben Weinstein
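The 'motion detection only' triage Ben describes can be sketched in a few lines. This toy version (not DeepMeerkat's actual pipeline) represents frames as 2D lists of grayscale values and flags any frame that differs noticeably from its predecessor; with real footage you would read frames via a video library and apply the same logic to discard the 80-95% empty clips:

```python
# Toy frame-differencing triage: flag frames whose mean absolute pixel
# difference from the previous frame exceeds a threshold.

def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two equal-sized frames."""
    total = sum(abs(x - y)
                for row_a, row_b in zip(a, b)
                for x, y in zip(row_a, row_b))
    return total / (len(a) * len(a[0]))

def flag_activity(frames, threshold=5.0):
    """Return indices of frames that changed noticeably (candidate activity)."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Toy example: mostly static "water", with a bright blob entering in frame 2
still = [[10] * 4 for _ in range(4)]
moving = [[10, 10, 200, 200],
          [10, 10, 200, 200],
          [10] * 4,
          [10] * 4]
print(flag_activity([still, still, moving, still]))  # -> [2, 3]
```

Frames 2 and 3 are flagged because the blob appears and then disappears; long unflagged runs would correspond to empty footage a researcher can skip.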
Ando Shah (ando@berkeley.edu)
2022-03-29 00:41:25

*Thread Reply:* In case you want a full blown underwater camera trap (definitely overkill if you have access to continuous power), I worked on one for manta ray research in 2013 that was more for deeper reefs (~30m) : https://ando.xyz/work/manta-id

Ando Shah
🤩 Sara Beery
Malte Pedersen (mape@create.aau.dk)
2022-03-29 01:39:09

*Thread Reply:* Hi Danielle, I am not sure if it is applicable for your study, but we published a bounding box annotated underwater dataset a few years ago in murky waters. Maybe it can be used as training data for your system 🙂 https://www.kaggle.com/datasets/aalborguniversity/brackish-dataset

kaggle.com
Danielle Montocchio (montocds@mcmaster.ca)
2022-03-29 14:57:21

*Thread Reply:* Wow thank you so much everyone for your input and help! I have some more options to explore and see what works 🙂 Didn't think what I was doing was too novel and glad to see that it isn't

Kakani Katija (kakani@mbari.org)
2022-03-31 10:38:58

*Thread Reply:* @Danielle Montocchio If you're interested, we're having a workshop Thursday/Friday morning (Pacific) (starts in ~20 minutes) that walks people through resources like Tator and FathomNet. Registration link is here: www.tinyurl.com/fathomnet

Danielle Montocchio (montocds@mcmaster.ca)
2022-03-28 18:09:52

Sorry! Here are the images

Sara Beery (sbeery@caltech.edu)
2022-04-07 12:26:34

Some exciting improvements for iWildCam 2022!!!

https://twitter.com/sarameghanbeery/status/1512102482641444868

twitter
Sara Beery (https://twitter.com/sarameghanbeery/status/1512102482641444868)
🎉 Jason Holmberg (Wild Me), Subhransu Maji, Jason Parham, Carly Batist, Mitch Fennell, Dan Morris, Jan Kees
❤️ Jon Van Oast, Justin Kay
💕 Jon Van Oast
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-04-07 12:52:20

*Thread Reply:* I hadn't realized this was the focus. This is awesome!

😁 Sara Beery
💯 Jon Van Oast
Oisin Mac Aodha (macaodha@caltech.edu)
2022-04-07 13:09:06

*Thread Reply:* Cool!

Dan Morris (agentmorris@gmail.com)
2022-04-07 18:17:07

*Thread Reply:* People ask us relatively often: "can I just count MD boxes above some threshold and use that to estimate the # of animals in an image?" We basically always say "no we don't recommend that", but I actually have no idea what the error distribution looks like, we just don't want to assume that it does anything remotely useful for counting. I know this competition is focused on sequence-level counting, but surely some enterprising competitor will make a baseline entry that just takes the per-image MD results that are on the competition page and the GT counts for the training data, and see what the numbers look like? And maybe that enterprising competitor will post a summary here? :)

😁 Sara Beery, Mitch Fennell
Sara Beery (sbeery@caltech.edu)
2022-04-07 18:18:09

*Thread Reply:* We might even put that on the leaderboard as a baseline (we did last year, but it had the complexity of species ID tied in) 🙂

🎉 Jon Van Oast
Dan Morris (agentmorris@gmail.com)
2022-04-07 18:19:02

*Thread Reply:* Yes, do that! I recommend the following advanced algorithm: { count = max(# of boxes) }

👍 Sara Beery, Jason Holmberg (Wild Me), Justin Kay
❤️ Jon Van Oast
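Dan's "advanced algorithm" can be sketched as follows. The detection format here (lists of dicts with a "conf" score) is only a loose stand-in for per-image detector output such as MegaDetector's JSON, and the function names are illustrative:

```python
# Baseline sequence-level count: per image, count detector boxes at or above
# a confidence threshold; per sequence, take the max count across images.

def count_boxes(detections, conf_threshold=0.8):
    """Number of detections at or above the confidence threshold."""
    return sum(1 for d in detections if d["conf"] >= conf_threshold)

def sequence_count(images, conf_threshold=0.8):
    """count = max(# of boxes) across the images in one sequence."""
    return max((count_boxes(dets, conf_threshold) for dets in images),
               default=0)

# Three images from one hypothetical camera-trap sequence
seq = [
    [{"conf": 0.95}, {"conf": 0.91}],  # two confident boxes
    [{"conf": 0.97}, {"conf": 0.45}],  # one confident, one weak
    [],                                # empty frame
]
print(sequence_count(seq))  # -> 2
```

Taking the max (rather than, say, the sum) avoids double-counting the same animals across frames of a sequence, which is the intuition behind the baseline.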
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-04-11 07:58:24

🚨Conservation Tech Directory update!🚨

Lots more entries, but the biggest update is that we now have a listserv you can subscribe to if you want direct notifications in your inbox of future directory updates! So we don’t have to keep spamming you on social media😅 Through the site: https://conservationtech.directory/ Directly via sign-up form: http://eepurl.com/hWwV0X

@Gracie Ermi

conservationtech.directory
❤️ Lily Xu, Catherine Villeneuve, Gracie Ermi, Sara Beery, Kakani Katija, Emily Charry Tissier, Jason Holmberg (Wild Me), Kai Waddington, Jinsu Elhance
🎉 Jon Van Oast, Ritwik, Akronix, Emily Charry Tissier, Jason Holmberg (Wild Me), Jinsu Elhance
Yves Bas (yves.bas@gmail.com)
2022-04-13 02:30:25

Now 55,000 taxa in the iNaturalist computer vision model: https://www.inaturalist.org/blog/63931-the-latest-computer-vision-model-updates

iNaturalist
🤩 Sara Beery, Nico Lang, Jason Holmberg (Wild Me), Emilio Luz-Ricca, Daniel Grzenda, Howard L Frederick
🤯 Oisin Mac Aodha, Justin Kay, Jason Holmberg (Wild Me), Stefan Schneider, Yuerou Tang
🎉 Sara Beery, Kakani Katija, Elijah Cole (Deactivated), Alex Brace, Dan Morris, Jason Holmberg (Wild Me), Jon Van Oast, Ben Best
👍 Jan Kees, Carly Batist
gvanhorn (grv22@cornell.edu)
2022-04-18 16:55:17

Sound ID 2.0 is now out for iOS devices! Android will be available next month. While the UI looks the same for 2.0, everything "under the hood" has been rewritten and optimized:
• efficiency is 10 to 100x better, depending on iPhone version
  ◦ You should be able to start Sound ID and leave it running while on a walk, hike, bike ride, etc. Get out and explore!
• precision and recall are up across the board for all species
• we now cover 560 US/Canada species, and 250 Western Palearctic species
  ◦ Lots of additional species on the horizon!
Hope you all enjoy! https://merlin.allaboutbirds.org/

Merlin Bird ID - Free, instant bird identification help and guide for thousands of birds
🎉 Sara Beery, Justin Kay, Declan, Oisin Mac Aodha, Tiffany Deng, Suzanne Stathatos, Lily Xu, Mitch Fennell, Kakani Katija, Ed Miller, Jon Van Oast, Nico Lang, Ali Johnston, Benjamin Hoffman, Sunnie S. Y. Kim, Stefan Schneider, Dan Morris, Elijah Cole (Deactivated), Carly Batist, Ritwik, Jason Holmberg (Wild Me), Noah Giebink, Sergei Nozdrenkov
🐦 Sara Beery, Oisin Mac Aodha, Casey Youngflesh, Stefan Schneider, Carl Boettiger
🙏 Jes Lefcourt
🎤 Subhransu Maji, Stefan Schneider, Noah Giebink
Declan (declan.pizzino@consbio.org)
2022-04-18 16:59:01

*Thread Reply:* Amazing. My wife and I have loved using this app while on walks and hikes. Thanks to everyone who's worked hard on this, both app and model-wise. Cool cool stuff

👍 gvanhorn
Oisin Mac Aodha (macaodha@caltech.edu)
2022-04-18 16:59:13

*Thread Reply:* Very cool!

Jes Lefcourt (jeslefcourt@gmail.com)
2022-04-18 17:02:09

*Thread Reply:* Totally! I was using it extensively just this weekend!

👍 gvanhorn
Ben Weinstein (benweinstein2010@gmail.com)
2022-04-18 17:32:15

*Thread Reply:* amazing. What were the main innovations that increased accuracy? Just more data?

gvanhorn (grv22@cornell.edu)
2022-04-18 17:34:02

*Thread Reply:* More data, smarter loss functions, and smarter, more aggressive augmentation functions. We have a pretty good grip on annotating audio now and it's allowed us to move way past weakly supervised methods.

Ben Weinstein (benweinstein2010@gmail.com)
2022-04-18 17:34:58

*Thread Reply:* (we are still at weakly supervised methods for trees). Good to hear.

gvanhorn (grv22@cornell.edu)
2022-04-18 17:35:24

*Thread Reply:* We also have a 1 week active learning cycle in place, so the annotators have a much better grip on where effort is helping, and where effort needs to be spent

Declan (declan.pizzino@consbio.org)
2022-04-18 17:35:54

*Thread Reply:* the models' ability to ID even with lots of background noise is pretty impressive

Ben Weinstein (benweinstein2010@gmail.com)
2022-04-18 17:36:02

*Thread Reply:* how do you find annotators? If I can be useful for connecting anyone in Ecuador/Colombia, let me know.

gvanhorn (grv22@cornell.edu)
2022-04-18 17:38:46

*Thread Reply:* Awesome, I'll keep that in mind! Currently we just tap into folks we know from prior experience. We've built our tool ecosystem around maximizing the efforts of a few experts instead of tapping into the big citizen science community. At some point we'll probably try to embrace some of that energy.

gvanhorn (grv22@cornell.edu)
2022-04-18 17:40:00

*Thread Reply:* @Declan Thank you! Yeah, that can be attributed to our more aggressive background augmentation strategies and to the detailed annotation protocol.

:the_horns: Declan
Ed Miller (ed@hypraptive.com)
2022-04-18 22:41:03

*Thread Reply:* I'm looking forward to the Android update. The app really helps my birding. When I can't tell where a call or song is coming from, the Merlin ID helps me know where to look!

👍 gvanhorn, Jon Van Oast
Jon Van Oast (jon@wildme.org)
2022-04-18 23:43:14

*Thread Reply:* this app is great! ......... would love to have a "mic trap" mode (is that a thing? like camera-trap but for audio?) where i could set this thing up outside and see what birds it picks up over the timeline of the day. 😄

👍 Jason Holmberg (Wild Me)
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-04-20 00:23:14

*Thread Reply:* Grab some Audiomoths 🙂

Ben Weinstein (benweinstein2010@gmail.com)
2022-04-23 12:30:43

*Thread Reply:* Just extensively tested this morning. The new version is really quite impressive. Huge congratulations. I love the highlight spectrogram and compare to call feature. I feel like this is a new high water mark for ecology machine learning and integration with existing tech.

💯 Jon Van Oast, Declan, Noah Giebink
Ben Weinstein (benweinstein2010@gmail.com)
2022-04-23 12:32:19

*Thread Reply:* Correctly caught a single western sandpiper in a flock of sanderlings from 40 feet amongst the heavy surf.

gvanhorn (grv22@cornell.edu)
2022-04-25 09:57:43

*Thread Reply:* Thanks Ben!

Priya Donti (priyald17@gmail.com)
2022-04-22 09:41:17

The Global Partnership on AI (GPAI) is launching a call for proposals to write an action-oriented roadmap for the responsible use of AI for biodiversity preservation, with proposals due May 10. See this post for more info: https://www.linkedin.com/feed/update/urn%3Ali%3Aactivity%3A6922846974228504576/

For context, GPAI is a government-recognized multi-stakeholder initiative (hosted through the OECD) that brings together leading experts from science, industry, civil society, international organizations, and government. Their goal is to provide governments and international organizations with actionable advice on the responsible use of AI.

I figure those in this group would be perfectly suited to write such a report, so I hope some of you will consider submitting to the RFP 🙂 I’m not directly affiliated with GPAI, but I and several colleagues wrote GPAI’s report on AI and climate change last year, which we presented at the UN Climate Change Conference (COP26) - so happy to answer any questions from that perspective, or connect you to those in GPAI with more info!

🎉 Jon Van Oast, Sara Beery, Lily Xu, Kakani Katija
🤩 Sara Beery, Lama Saouma
Lama Saouma (lama.saouma@gmail.com)
2022-04-22 17:21:36

*Thread Reply:* Thanks for spreading the word Priya! Happy to answer any questions (I am the contact person in the RFP)

🙌 Priya Donti
Sara Beery (sbeery@caltech.edu)
2022-04-22 11:54:35

New season of WILDLABS virtual meetups centered around movement ecology!

https://twitter.com/WILDLABSNET/status/1517065606415142913

twitter
WILDLABS Community (https://twitter.com/WILDLABSNET/status/1517065606415142913)
🤩 Olivier Gimenez, Oisin Mac Aodha, Jason Holmberg (Wild Me), Kakani Katija, Carly Batist
🎉 Jon Van Oast, Jason Holmberg (Wild Me), Carl Boettiger, Dan Morris
❤️ Talia Speaker, Catherine Villeneuve, Emilio Luz-Ricca, Stephanie O'Donnell, Léonard Boussioux
Björn Lütjens (bjoern.luetjens@gmail.com)
2022-04-26 08:26:15

Does anybody know PhDs, Postdocs, or Profs in Conservation, AI, and Polar Regions? We are organizing an NSF workshop and still have room for a speaker or two :))

🐻‍❄️ Sara Beery, Lily Xu, Daniel Grzenda
Holger Klinck (hk829@cornell.edu)
2022-04-26 08:29:09

Yes, Heather Lynch at Stony Brook. She uses AI and satellite images to monitor penguin colonies.

👍 Devis Tuia, Sara Beery, Lily Xu, aruna
Emily Charry Tissier (hello@whaleseeker.com)
2022-04-26 09:07:39

@Justine Boulent would be fantastic!

👍 Sara Beery
🐳 Justine Boulent
Akronix (akronix5@gmail.com)
2022-04-28 03:02:52

does anyone know if yesterday's WILDLABS session, "Virtual Meetup: Data Collection in Movement Ecology", was recorded?

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-04-28 05:27:06

It was!

👍 Ștefan Istrate, Oisin Mac Aodha, Sara Beery, Justine Boulent, Akronix, Carly Batist
🤩 Jon Van Oast
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-04-28 05:27:26

We're processing it now - we'll post the full meetup and the individual talks to our youtube channel and wildlabs

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-04-28 05:27:36

possibly today, definitely by early next week

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-04-28 05:29:01

Also - the next session, about data analysis, is now open for registrations; it'll be of particular interest to this group. @Sara Beery is one of the fantastic speakers

🙌 Akronix, Lily Xu, Dan Morris, Carly Batist
🎉 Jon Van Oast, Talia Speaker
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-04-28 05:29:40

https://www.eventbrite.co.uk/e/virtual-meetup-data-analysis-in-movement-ecology-tickets-328359842127

Jason Parham (bluemellophone@gmail.com)
2022-05-02 15:51:01

New animal ID research datasets posted in #animal_re-id

🎉 Jason Holmberg (Wild Me), Stefan Schneider, Declan, Justin Kay, Greg Lipstein, Carly Batist, Ankita Shukla
🐆 Stefan Schneider
🍾 Jon Van Oast
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-05-02 15:52:53

*Thread Reply:* "Leopards and hyenas and belugas. Oh my!"

Dan Morris (agentmorris@gmail.com)
2022-05-02 16:03:05

*Thread Reply:* I learned about this competition from the beluga data set:

https://www.drivendata.org/competitions/96/beluga-whales/

DrivenData truly has the market cornered on conservation puns. See:

https://www.drivendata.org/competitions/59/camera-trap-serengeti/

😅 Carly Batist
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-05-02 16:04:57

*Thread Reply:* I think that was physical pain I just felt with "Hakuna ma-data".

🤩 Jon Van Oast
Jason Parham (bluemellophone@gmail.com)
2022-05-02 16:06:04

*Thread Reply:* @Dan Morris “Whale Known Belugas”, “Belugas Got Back”, and “Fluke Box Hero” were also floated internally before the competition launched. One of the hardest things we had to do to get the competition set up was deciding on the best pun.

🙂 Dan Morris
😍 Declan
😂 Carly Batist
Dan Morris (agentmorris@gmail.com)
2022-05-02 16:06:57

*Thread Reply:* I'll make you a deal... if you do another competition next year, and you call it Fluke Box Hero, I will personally record the parody theme song.

👀 Jason Parham
🎸 Greg Lipstein, Mitch Fennell, Carly Batist
Jason Parham (bluemellophone@gmail.com)
2022-05-02 16:07:12

*Thread Reply:* Shoutout to @Greg Lipstein and the rest of the DrivenData team for making that decision really hard.

🎉 Jon Van Oast
Jason Parham (bluemellophone@gmail.com)
2022-05-02 16:07:41

*Thread Reply:* done

Nicholas Osner (nicholasosner@gmail.com)
2022-05-05 08:46:51

Hi everybody, I am Nicholas Osner from WildEye Conservation. Thanks @Dan Morris for inviting me. I look forward to being more involved in the community.

I would like to make you all aware of the project I have been working on called TrapTagger. It's a web application that uses AI to process camera trap data, and was developed in close collaboration with WildCRU - part of the University of Oxford's Zoology Department. It's open source and completely free to use.

At present, our species classifier is only capable of identifying Southern African species, with a full sub-Saharan model in the works. However, we do use MegaDetector to perform the job of blank-image removal, and we also have a heavily-optimised annotation workflow, so you can easily use our site as a nice GUI wrapper around MegaDetector for your particular biome. Moreover, we are open to training more biome-specific models if you have sufficient annotated data to do so (which you can easily generate through TrapTagger).

You can find more information on our website, including some more detailed reports in the documentation section. Additionally, you can find our repo here.

I would also like to draw your attention to another project we have in the works called the Elephant Survey System, where we aim both to reduce the cost and to improve the accuracy of elephant surveys using a combination of AI and a light-aircraft-mounted camera rig, which you can read about here. If you would like to make use of the associated aerial elephant dataset for your own purposes, we have made it available here.

If you have any questions, please don't hesitate to get in touch with me.
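
The blank-removal step described above can be sketched in a few lines, assuming MegaDetector's batch-output JSON layout (a list of images, each with detections carrying a `conf` score); the file names and the 0.2 threshold below are hypothetical and not TrapTagger's actual code:

```python
# Split MegaDetector-style results into "has animal" and "probably blank"
# lists: keep an image when any detection clears the confidence threshold.
def split_blanks(md_results, threshold=0.2):
    keep, blanks = [], []
    for im in md_results["images"]:
        confs = [d["conf"] for d in im.get("detections", [])]
        (keep if confs and max(confs) >= threshold else blanks).append(im["file"])
    return keep, blanks

# Hypothetical results for three camera trap images.
results = {
    "images": [
        {"file": "img_001.jpg", "detections": [{"category": "1", "conf": 0.92}]},
        {"file": "img_002.jpg", "detections": []},
        {"file": "img_003.jpg", "detections": [{"category": "1", "conf": 0.05}]},
    ]
}
keep, blanks = split_blanks(results)
```

Everything below the threshold would go to the blank pile for spot-checking rather than deletion, since low-confidence detections occasionally hide real animals.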

🙌 Stephanie O'Donnell, Ștefan Istrate, Sara Beery, gvanhorn, Oisin Mac Aodha, Brandon Davis, Carly Batist, Omiros Pantazis, Justin Kay, Jason Holmberg (Wild Me), Talia Speaker, Dan Morris, Akronix, Emilio Luz-Ricca, Anton Alvarez, Sinan Robillard, Juan Arrechea
😎 Jon Van Oast
Ștefan Istrate (stefan.istrate@gmail.com)
2022-05-05 09:04:52

*Thread Reply:* Welcome, and congrats on the well-designed TrapTagger!

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-05-05 09:19:55

*Thread Reply:* Awesome!! Will put this on our To-Add list for the next Cons Tech Directory update

Nicholas Osner (nicholasosner@gmail.com)
2022-05-06 04:35:02

*Thread Reply:* Thanks, it's much appreciated!

Ben Weinstein (benweinstein2010@gmail.com)
2022-05-10 12:37:40

Just bringing this conversation to the wider community. I am writing a tree species classification paper (120 species, full continent scale). The question from an ecology collaborator on reading the draft was that our accuracy of about 65% is lower than many machine learning papers in ecology. I'm copying my response below and welcome discussion from others.

"One of my ongoing concerns for publication and presentations is the overall low accuracy rates. I have not done a complete survey of the literature by any means, but it seems to me that machine learning prediction rates are around 70%. Just wondering what your thoughts are on if and how to deal with that."

I think your point is true, but indicative of a problem rather than a benefit. There is a pernicious bias in the ecological machine learning literature toward 'solving' problems. It is a bit of a paradox, because every introduction wants to set up an applied problem as crucial, but every results section wants to show high accuracy values. The result is a real epidemic of lax evaluation criteria, either through a lack of data or through weak rigor in setting up a problem that actually reflects the full breadth of the difficulty during prediction. The above figure is a perfect example of that. It is completely natural to do a random train/test split for each class and assess the accuracy per species. If you only had NEON data, you would be deceived into thinking that your accuracy, when it comes time to make predictions at the full site scale, is more than 150% higher than it actually is.

The funny thing is that these problems seem to manifest mostly in the applied papers and analyses. Pure computer vision research that focuses solely on a specific theory doesn't have the pretense of solving any applied problem, and often has low evaluation scores, because most problems that are worth solving 1) need lots of data, 2) are genuinely hard. For example, take the famous Mask R-CNN paper, which has nearly 18,000 citations in 3 years: its first analysis has an average precision of less than 0.4 (https://arxiv.org/abs/1703.06870).

As I put together an outline and figures for the paper, my goal is to articulate that tree species prediction, when framed from the perspective of wanting to make predictions on the scale of tens of millions of trees, is vastly harder than the literature would have you believe. The ugly consequence of this dynamic is that applied ecologists become numb and distrustful of results in these papers, since every paper purports to have outstanding accuracy scores. For example, in the tree crown delineation task, the literature has been reporting greater than 85% accuracy for nearly 15 years, which really begs the question: why would people keep writing papers about such an easy and already-solved problem? The only real conclusion is that the analysis as constructed doesn't reflect the actual needs of the applied workflow, but rather makes assumptions either due to a lack of data or due to the desire for higher values. That we can show our model is 2.5x better than a standard off-the-shelf approach is, I think, the crux of it.
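
The leakage Ben describes is easy to demonstrate: with images grouped by site, a random per-image split almost always places the same site on both sides, while holding out whole sites cannot. A minimal sketch (the NEON-style site codes are just illustrative):

```python
import random

# Each record is (site, image_id): 3 sites, 10 images each.
records = [(site, f"{site}_{i}") for site in ("HARV", "OSBS", "TEAK") for i in range(10)]

def random_split(records, frac=0.8, seed=0):
    # Naive per-image split: shuffles all records, ignoring site structure.
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * frac)
    return shuffled[:cut], shuffled[cut:]

def site_holdout_split(records, test_sites):
    # Spatial split: every image from a held-out site goes to test.
    train = [r for r in records if r[0] not in test_sites]
    test = [r for r in records if r[0] in test_sites]
    return train, test

tr_r, te_r = random_split(records)
tr_s, te_s = site_holdout_split(records, {"TEAK"})

leak_random = {s for s, _ in tr_r} & {s for s, _ in te_r}  # sites leak across the split
leak_site = {s for s, _ in tr_s} & {s for s, _ in te_s}    # empty: no leakage
```

The random split's accuracy then reflects within-site generalization only; the site-holdout number is the one that speaks to full-site-scale prediction.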

👏 Dan Morris, Ando Shah, Declan, Kirsten Crane, Riccardo de Lutio, Omiros Pantazis, Justine Boulent, Sara Beery, Paige Ngo, Kasirat, Emilio Luz-Ricca, Elijah Cole (Deactivated)
👍 Justin Kay, Sara Beery, Rita Pucci
Dan Morris (agentmorris@gmail.com)
2022-05-10 14:25:04

*Thread Reply:* Mic drop! Really well-written. I think there are really two issues here: (a) artificial inflation of accuracy numbers via (IMO usually unintentional) questionable evaluations (which you mention), and (b) an assumed coupling of "accurate" and "useful". Addressing (b) likely requires us to start incorporating more human subjects studies, and also some less-quantitative, more anecdotal case studies about whether/how/why AI systems are actually getting used/abandoned. Exciting that some of those have started to roll out in the last couple of years.

🙌 Mitch Fennell
Daniel Davila (daniel.davila@kitware.com)
2022-05-10 14:34:19

*Thread Reply:* "Machine learning prediction rates are around 70%" - is this for a specific, well-studied task or benchmark? It should be noted that you can't generalize performance like that across datasets. A reasonable score on e.g. COCO isn't necessarily a reasonable score on CrowdHuman. And when you talk about the constraints of most real-world problems, namely that there is usually a significant lack of relevant data, you can usually take SOTA and chop it in half for out-of-the-box performance (without significant engineering around the ML model itself to boost performance)

Daniel Davila (daniel.davila@kitware.com)
2022-05-10 14:35:35

*Thread Reply:* That being said, what's right here may be beside the point of whether or not you are queuing up an uphill battle with reviewers. If the magic number for a "good" paper is 70%, or 85%, or whatever, that's as good a reason as any to get rejected these days

🎉 Ben Weinstein, Sara Beery
Carl Boettiger (cboettig@berkeley.edu)
2022-05-10 15:11:52

*Thread Reply:* @Ben Weinstein Thanks for raising this issue, definitely resonates.

I think it's worth pointing out how this illustrates a very stark difference between ecology literature and ML literature regarding benchmarks. I believe these two fields each sit at relatively opposite extremes in this regard, while the ideal is probably somewhere in the middle.

The reviewer makes no reference to a specific benchmark (or apparently even any examples) in quoting 70%. In doing so, the reviewer is quite consistent with how we publish methods in ecology - everyone evaluates their method wrt their own data rather than a shared benchmark.

In contrast, I gather most ML venues would never consider a methodological advance in accuracy that was not tested against some existing benchmark dataset. No one cares about your ability to discriminate cats and dogs in just any collection of cat and dog photographs; it has to be ImageNet. (Experts here can fill in the blanks better, sorry -- there was an excellent thread in #random on this a while back, I think.)

I think we can all agree both extremes are not particularly productive -- at the end of the day we're trying to solve an abstract problem, not over-fit a benchmark. But for all that we love to celebrate the diversity and system-specificity of ecology data, we can often be too loose in what we consider a comparable task: should we really consider the accuracy of ML identifying trees from space essentially comparable to the performance we expect from other ML literature in, say, identifying mammals from photographs?

👍 Sara Beery, Mitch Fennell
Pietro Perona (perona@caltech.edu)
2022-05-14 14:09:24

*Thread Reply:* Nice discussion. Sometimes there is a need for a study that identifies specific challenges that algorithms face and labels datasets in ways that allow you to benchmark against those challenges. You will find two examples here (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5975165) and here. If you guys are interested in doing this for tree species identification, I will be happy to have a discussion with you to help bring clarity to this area. One more suggestion: iNaturalist is a natural benchmark, and you should be able to use the iNaturalist API to benchmark your algorithm against iNaturalist on the iNaturalist dataset. @gvanhorn will be able to tell you if this is feasible.

Pietro Perona (perona@caltech.edu)
2022-05-14 14:13:04

*Thread Reply:*

Ritwik (rittyun@yahoo.com)
2022-05-20 09:15:41

*Thread Reply:* Sorry for coming to this so late, and thanks for raising this topic and for the excellent, well-rounded argument. The origins of this sentiment seem to be the trickle-down effect of the publication bias in journals toward the so-called "striking" work expected by editors. As for evaluation, I think it's crucial to test models on out-of-distribution data. We all develop these models in the hope that they'll be applied in real life, while the standard train-test splits can be far away from what the model will actually see. It can be hard to get relevant out-of-distribution data, but it gives a more honest expectation of how the model will behave in real life.

💯 Carl Boettiger
Sara Beery (sbeery@caltech.edu)
2022-05-11 10:00:08

Starting in one hour!

https://twitter.com/WILDLABSNET/status/1523690388581920769

🙌 Omiros Pantazis, Stephanie O'Donnell, Oisin Mac Aodha, Yihang She, Thijs, Declan, Lily Xu, Mitch Fennell, Talia Speaker
Thijs (thijs@q42.nl)
2022-05-11 10:44:36

*Thread Reply:* Too bad I can't join, is there a recording later on?

Sara Beery (sbeery@caltech.edu)
2022-05-11 10:47:06

*Thread Reply:* Yes! They always post recordings after, I'll share the link when it's up!

Thijs (thijs@q42.nl)
2022-05-11 10:47:55

*Thread Reply:* Awesome, have fun!

Sara Beery (sbeery@caltech.edu)
2022-05-11 11:42:03

Welcome to everyone joining from the WILDLABS movement ecology meetup! Feel free to introduce yourselves 🤩

👋 Lily Xu, Kenady Wilson, Stephanie O'Donnell, Jason Holmberg (Wild Me), Paige Ngo, Sophia Abraham, Ted Schmitt, Sinan Robillard, Jason Parham, Fagner Cunha, Carly Batist
🎉 Jon Van Oast, Stephanie O'Donnell, Jason Holmberg (Wild Me), Talia Speaker, Sophia Abraham, Carly Batist
👍 Oorjit Mahajan, Stephanie O'Donnell, Jason Holmberg (Wild Me), Sophia Abraham
Alex Borowicz (alex.borowicz@stonybrook.edu)
2022-05-12 12:27:46

Is there anyone in the NYC area who'd be interested in joining forces to put together a workshop proposal for the Student Conference on Conservation Science in October? Deets on the conference here, but in short it's a really awesome and supportive little conference that's all about building up undergrad and grad students in conservation. And it's at the American Museum of Natural History which makes it super fun.

I'm envisioning a pretty basic workshop aimed at getting participants with absolutely no background in AI familiar with what's out there that's easy to use/get started with to help with research they're already doing/planning, how to start thinking about data from a computer's perspective, and some basic demonstrations.

❤️ Lily Xu, Sara Beery, Monty Ammar, Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-05-24 20:19:21

*Thread Reply:* Hi! I'm a PhD student at CUNY and love that conference! It would be a great workshop opportunity for sure

Kasirat (kasirat_turfi@hotmail.com)
2022-05-13 04:02:46

Is anyone here working with forest/tree canopy image data? I am using Mask R-CNN to delineate the canopies, and the dataset is very small. Any tips or tricks would be much appreciated. If anyone has applied Mask R-CNN to their tree canopy image data, how did it perform? Generally, with a pretrained model (on COCO) as a starter, it is expected that the results would be quite acceptable, but my model's performance is nowhere near "acceptable", especially in segmenting the canopies. Detection is okay, but I'm getting many overlapping boxes generated for the same canopy.
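
On the duplicate boxes specifically: a stricter non-maximum suppression pass over the predictions sometimes helps when the detector's built-in NMS threshold is too permissive. A minimal sketch (boxes are [x1, y1, x2, y2]; the values are made up):

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.4):
    # Greedy NMS: keep the highest-scoring box, drop any later box that
    # overlaps a kept one above the threshold.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return kept

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the two near-duplicate boxes collapse to one
```

Lowering `iou_thresh` merges duplicates more aggressively, at the risk of suppressing genuinely adjacent crowns in dense canopy.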

Devis Tuia (devis.tuia@epfl.ch)
2022-05-13 05:12:47

*Thread Reply:* Especially if the forest is broadleaf, it is very difficult to do single-tree segmentation from images. We were working on that in the past (segmenting single trees from images or lidar scans) and lidar was very, very useful

Devis Tuia (devis.tuia@epfl.ch)
2022-05-13 05:15:35

*Thread Reply:* Even by eye you often can't see where one tree finishes and the next one starts, and a forest is inherently 3D. It's a tough problem; there are a lot of papers in journals like Remote Sensing of Environment and entire confs like SilviLaser or ForestSAT

Kasirat (kasirat_turfi@hotmail.com)
2022-05-13 05:26:59

*Thread Reply:* Yes, it is very difficult to tell one tree apart from another even by human eye (I couldn't do it for the life of me). Looking for ways to find a solution. I am not looking for over 90% accuracy; anything over 70% would be good. I don't have high hopes for Mask R-CNN on crowded trees.

Daniel Davila (daniel.davila@kitware.com)
2022-05-13 10:01:28

*Thread Reply:* One idea would be to bootstrap a larger training set with someone else's model. Have you looked into some of the open source tree inventory models out there? TreeTect comes to mind, there are a few others too.

https://github.com/krakchris/TreeTect

Is the goal to discern individual trees or just to segment out the region of the image that is or isn't tree?

Ben Weinstein (benweinstein2010@gmail.com)
2022-05-13 10:04:47

*Thread Reply:* I can't claim to be an expert because data vary so widely, but we have spent several years on the problem. A couple of observations.

1) Because there is massive intra-class variance inside of 'tree', you need a lot of training data, especially if you want to generalize to large forests or across geographies. Pretraining is key (https://www.sciencedirect.com/science/article/pii/S157495412030011X).

2) Really consider whether you need semantic segmentation through Mask R-CNN or whether bounding boxes are sufficient. We have found that the extra effort of training a pixel-level model is not worth it. Many trees can be approximated by 4-point boxes, and that does not inhibit downstream analysis. It allows you to annotate many more images faster. We use torchvision's RetinaNet and have released a pretrained model that I'd love you to try. Finetuning it to your data usually works well. https://deepforest.readthedocs.io/

3) This one is a bit controversial, but outside of using the unsupervised LiDAR algorithms for pretraining in the RGB, I have never seen a LiDAR algorithm that outperforms RGB alone. The caveat is that I think it depends a lot on LiDAR point density. The majority of LiDAR papers use hundreds of points per meter through intense data collection of a small area. When you want to monitor millions of trees, this level of data collection is usually not possible. NEON LiDAR data averages about 4 points per meter over large areas. At this scale, the LiDAR is not that useful, and a canopy height model is often helpful. I do think any multi-spectral data may be useful; we have shown it helps in tropical forests (https://ieeexplore.ieee.org/abstract/document/9387530).

4) If I had to put money on it, the future is in massive pretrained models, after significant research on scaling across resolutions. What resolution is your data?

🌟 Carl Boettiger
👍 Kasirat
Ben Weinstein (benweinstein2010@gmail.com)
2022-05-13 10:08:52

*Thread Reply:* We have annotations here as well, in case you would like to use them for your method: https://github.com/weecology/NeonTreeEvaluation

Devis Tuia (devis.tuia@epfl.ch)
2022-05-13 10:29:53

*Thread Reply:* +1 for the lidar point cloud argument; with a few returns per square meter it gets complicated to segment at scale

👍 Sara Beery
Jon Van Oast (jon@wildme.org)
2022-05-13 13:39:48

*Thread Reply:* @Jason Holmberg (Wild Me) - weren't we just talking about this a couple days ago?

Ben Weinstein (benweinstein2010@gmail.com)
2022-05-13 14:09:54

*Thread Reply:* https://www.mdpi.com/2072-4292/14/6/1317 Here is an example of that LiDAR challenge. An extremely thorough, well-done paper, probably a year's worth of diligent work, is only about 5% better than a completely off-the-shelf RCNN that you could run in a couple of days. I would be willing to guess that if that's true, then the pretraining we've done on DeepForest is probably equal to that, and this was on higher-resolution LiDAR data. It's strange, and I do hope that eventually a massive, multi-sensor RGB/CHM/HSI pretrained algorithm will outperform, but I think the labels are lacking to truly evaluate it.

Kasirat (kasirat_turfi@hotmail.com)
2022-05-13 20:37:49

*Thread Reply:* @Daniel Davila thank you for this link, I had not seen it before. I am working with forest data, so my search has been limited; I hadn't thought about urban tree data.

Kasirat (kasirat_turfi@hotmail.com)
2022-05-13 20:51:58

*Thread Reply:* @Ben Weinstein I have been poring over this new paper for the last few days, the one you linked here on urban individual tree detection from lidar point clouds.

I started working on this project not too long ago and am new to lidar point clouds. My actual data is an ALS point cloud, and I want to segment individual trees. There hasn't been a ton of study on the application of deep learning to segmenting individual trees in point clouds. I am going about it systematically, trying out the projection-based method, i.e. taking the bird's-eye view, getting a 2D raster, and applying image-based instance segmentation. I had hoped that masks would delineate a tree better than bounding boxes. However, at this stage, I can appreciate your conclusion that masks don't really improve on the bounding box situation, especially when the trees are dense. I may have to abandon the Mask R-CNN idea and look at 3D object detection / 3D instance segmentation algorithms. Another incentive to start with Mask R-CNN was to establish a baseline to compare with another approach.
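
The bird's-eye-view projection step can be sketched as a max-z rasterization of the point cloud, i.e. a crude canopy height model to feed an image-based segmenter (the cell size and points below are made up):

```python
import math

def bev_raster(points, cell=1.0):
    # Project (x, y, z) points onto a 2D grid, keeping the max z per cell.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = int(math.floor((max(xs) - x0) / cell)) + 1
    h = int(math.floor((max(ys) - y0) / cell)) + 1
    grid = [[0.0] * w for _ in range(h)]
    for x, y, z in points:
        r = int((y - y0) / cell)
        c = int((x - x0) / cell)
        grid[r][c] = max(grid[r][c], z)
    return grid

# Toy ALS returns: the first two points fall in the same 1 m cell.
points = [(0.2, 0.3, 5.0), (0.8, 0.1, 12.5), (2.4, 1.7, 8.0)]
grid = bev_raster(points)
```

A Mask R-CNN baseline would then run on this raster, or on it stacked with intensity/RGB channels, as the 2D input image.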

Kasirat (kasirat_turfi@hotmail.com)
2022-05-13 20:55:34

*Thread Reply:* @Jon Van Oast if you and other people are working on point clouds of forests, I would love to know more about your work! I am interested in "point clouds", "deep learning" and "segmentation"!

👍 Jon Van Oast
Jon Van Oast (jon@wildme.org)
2022-05-13 20:58:20

*Thread Reply:* @Kasirat - at this point we are just at the very beginning of wondering what work is out there. so this thread is great!

👍 Kasirat
Fernando Pérez (fernando.perez@berkeley.edu)
2022-05-13 18:56:47

Hi everyone! You may have already seen this, but in case it didn’t make it on your radar - our DS4E Initiative at Berkeley is hiring an executive director!

I hope some of you may find this position interesting, and please feel free to share with colleagues. You can find more about the initiative here, but I’m happy to answer any questions you might have.

‼️ Lily Xu, Sara Beery
💚 Lily Xu, Suzanne Stathatos, Sara Beery
Monty Ammar (montyx23@gmail.com)
2022-05-15 16:49:47

Hey everyone, I'm at the University of Kent doing a master's in Conservation Biology.

I’ve been working on a review titled: “Machine Learning in Conservation Science: a review”. It started out as a uni project, but I’m gathering some academics from our department who have experience in ML & conservation to collaborate and I aim to submit to ‘Biological Conservation’ for publication by the end of the summer.

I know there are already a number of great reviews (@Devis Tuia @Ben Weinstein @Sara Beery @Benjamin Kellenberger, and I'm sure some other authors on this Slack too) on ML in ecology, and a fantastic perspectives paper in Nature. I envisage this review article covering a slightly different angle than the ones mentioned, mainly because it will focus on the progress of ML in conservation science holistically rather than individually on the ecology, deep learning, CV, or SDM applications - e.g. tracking illegal wildlife trade on social media/the internet, optimising ranger patrols, automated biodiversity assessments.

The aim is to inform practitioners and researchers from all corners of conservation science about the range of problems ML is tackling in this field now.

I’d like to Invite people from here with experience in ML in conservation science (i.e bioacoustics, CV, decision making/ management optimisation etc) to collaborate on the review by reviewing a few studies and writing a couple paragraphs on them to go in the manuscript!

Please reach out to me if you want to help! It seems crazy to write a review on this topic and not get input from this group! 😁

🙌 Sara Beery, Declan, Ted Schmitt, Merlin Bleile, Carly Batist, Sinan Robillard
👍 Rita Pucci
Ted Schmitt (teds@allenai.org)
2022-05-16 19:17:34

*Thread Reply:* Please, please be sure to post a link to the review here once it is published.

👍 Monty Ammar, Sara Beery
Monty Ammar (montyx23@gmail.com)
2022-05-16 20:13:39

*Thread Reply:* Definitely will @Ted Schmitt 👌

Devis Tuia (devis.tuia@epfl.ch)
2022-05-17 07:16:49

*Thread Reply:* Sounds like a relevant review, very complementary to our perspective paper! If you want to have a chat on do's and don'ts when writing a paper like this, just PM me!

❤️ Monty Ammar
Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2022-05-17 08:04:49

*Thread Reply:* I second that; happy to talk about it!

❤️ Monty Ammar
Sara Beery (sbeery@caltech.edu)
2022-05-17 10:18:32

*Thread Reply:* Agreed 🙂

❤️ Monty Ammar
Monty Ammar (montyx23@gmail.com)
2022-05-17 12:32:29

*Thread Reply:* Perfect. 🙏

Victor Anton (victor@wildlife.ai)
2022-05-24 23:38:44

*Thread Reply:* @Monty Ammar it's not a scientific article but sharing here a recent report on the use of AI for the NZ environment in case it's of any relevance for your review https://aiforum.org.nz/2022/05/18/ai-for-the-environment-in-aotearoa-new-zealand/

Monty Ammar (montyx23@gmail.com)
2022-05-25 05:17:01

*Thread Reply:* Thank you @Victor Anton 🙂

👍 Victor Anton
Lukas Picek (lukaspicek@gmail.com)
2022-05-17 05:57:53

Dear All,

I want to invite you to participate in the LifeCLEF 2022 Snake ID Challenge! We need humans (novices and experts) to identify the same images to compare humans with AI-based algorithms. This project will help you practice your snake ID skills and will also provide valuable data on which species are most commonly misidentified as which other species.

Through June 15th, you will identify 150 images of snakes from around the world. We'll publish bios of the top 3 IDers, and the top IDers get a book of their choice.

If you decide to join, you will be shown a picture of a snake and its country of origin. Your task is to recognize the snake species. Use of Google search or citizen science platforms to identify it is expected. Any existing "tool" is allowed, with one exception -- machine-learning-based systems, e.g., Google Image Search, the iNaturalist recognition tool, the HerpMapper recognition tool, etc.

More detail and link to the snake ID page at the end of the consent form/survey: https://forms.gle/awongX8Fk3tX2zGG6 Please be sure to use the same email address for the consent form and the lab.citizenscience.ch website where the snake IDs are so that we can match your IDs with your responses to the survey.

Thank you for considering your participation!

👍 Oisin Mac Aodha, Leonardo Viotti, Dan Morris
🐍 Sara Beery, Lukas Picek, Carly Batist
Rita Pucci (rita.pucci85@gmail.com)
2022-05-18 11:04:10

Hi everyone, I am Rita from the University of Udine (northern Italy). I have studied ML and I am almost new to the world of interdisciplinary work in the field of computer vision and conservation. I would really like to be more in this world, so here I am 😄 I hope to contribute to this field and collaborate for the best! Nice to meet all of you; I know some of you from the literature and from social media (well.. it is true..). I am willing to collaborate and to help other researchers if I can!

👋 Sara Beery, Felipe Parodi, Jason Holmberg (Wild Me), Devis Tuia, Fagner Cunha, Omiros Pantazis, Monty Ammar, Jon Van Oast, Akronix, Ritwik, Benjamin Kellenberger
Greg Lipstein (greg@drivendata.org)
2022-05-18 15:28:01

Hi all! - If anyone is looking for an accessible way to learn or practice with computer vision for conservation, a couple data scientists on our team just put out a practice wildlife image classification challenge with some camera trap images and 8 classes.

The data comes from partners at the Wild Chimpanzee Foundation and the Max Planck Institute for Evolutionary Anthropology. There is also an introduction to image classification using camera trap images blog post that walks through an initial approach to the challenge.

Please feel free to share with any students, data enthusiasts, etc. who might be interested in getting some practice in this field! Thanks!

❤️ Suzanne Stathatos, Felipe Parodi, Sara Beery, Fagner Cunha, Stephanie O'Donnell, Dongmin (Dennis) Kim, Rita Pucci, Carly Batist, nyakundi lamech
😎 Jon Van Oast, Sara Beery
👏 Dan Morris
Reece Rhinehart (rhin0098@pacificu.edu)
2022-05-18 16:27:12

Thanks for this!

Sara Beery (sbeery@caltech.edu)
2022-05-19 10:47:35

@gvanhorn is giving a talk today on Merlin Bird ID!!

https://ecornell.cornell.edu/keynotes/overview/K051922/?utm_content=buffer6bab8&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

🐦 Oisin Mac Aodha, Andrés C Rodríguez, Justin Kay, Sunnie S. Y. Kim, Jason Parham, Sophia Abraham
👍 Kakani Katija, Jason Parham, Dan Morris, Sophia Abraham, Victor Anton
🎉 Jon Van Oast, Suzanne Stathatos, Sophia Abraham
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2022-05-20 13:36:55

Our slack founder, @Sara Beery, defended her thesis today 🎉 I could not be more proud of all she's done and excited to see where she’ll go!

💕 Jon Van Oast, Declan, Lily Xu, Stefan Schneider, Jason Holmberg (Wild Me), Sophia Abraham, Daniel Davila, Lauren Gillespie, Chris Yeh, David, Elijah Cole (Deactivated)
🍾 Jon Van Oast, Lily Xu, Ando Shah, Ștefan Istrate, Stefan Schneider, Jason Holmberg (Wild Me), Omiros Pantazis, Frederic Fol Leymarie, Peter Bull, Casey Youngflesh, Akronix, Sophia Abraham, Nikhil Vytla (he/him), Daniel Davila, Dhruv Sheth, Mitch Fennell, Peter van Lunteren, Talia Speaker, Lauren Gillespie, Ted Schmitt, Chris Yeh, Carly Batist, Sicily Fiennes, David, Anton Alvarez
🎉 Jes Lefcourt, Declan, Justin Kay, Kakani Katija, Thor Veen, Justine Boulent, Lily Xu, Beckett Sterner, Stefan Schneider, Avi Sundaresan, Jason Holmberg (Wild Me), Sunnie S. Y. Kim, Olivier Gimenez, Emily Huong, Fagner Cunha, Peter Bull, Alex Brace, Yihang She, Timm Haucke, Grace Hansen, Matt Weldy, Dan Morris, Sophia Abraham, Nikhil Vytla (he/him), Daniel Davila, Dhruv Sheth, Hemal Naik, Thijs, Riccardo de Lutio, Rowan Converse, Lucia Gordon, Kasirat, Kewal Shah, Emilio Luz-Ricca, Marta Skreta, Nicholas Osner, Lloyd Hughes, Talia Speaker, Joanna Turner, Lauren Gillespie, Stephanie O'Donnell, Dongmin (Dennis) Kim, Carly Batist, Jacob Kamminga, Ritwik, Sachith Seneviratne, David, Cody Kupferschmidt, Juan Sebastián Cañas Silva
❤️ Monty Ammar, Ben Weinstein, Sophia Abraham, Nikhil Vytla (he/him), Daniel Davila, Talia Speaker, Chris Yeh, David, Juan Sebastián Cañas Silva
👍 Matt Hron, Ivory lu, Chris Yeh, David, Barry Brook
Sara Beery (sbeery@caltech.edu)
2022-05-20 14:17:45

*Thread Reply:* Thank you!!!!

Stefan Schneider (sschne01@uoguelph.ca)
2022-05-20 14:21:23

*Thread Reply:* Congrats Sara! Your defence was a great ride through all the amazing work you've done! And only the start of an incredible career ahead 😄

❤️ Sara Beery
Kakani Katija (kakani@mbari.org)
2022-05-20 14:23:37

*Thread Reply:* Congratulations!!!! Can't wait to hear what's next.

❤️ Sara Beery
Alex Borowicz (alex.borowicz@stonybrook.edu)
2022-05-20 15:12:07

*Thread Reply:* Such a good talk! Congratulations, Doctor!

❤️ Sara Beery
Chris Lang (chrislang@ucsb.edu)
2022-05-20 15:36:54

*Thread Reply:* Congratulations Sara!! I had to leave at 10 but any chance you can share the recording?

Sara Beery (sbeery@caltech.edu)
2022-05-20 15:40:08

*Thread Reply:* Yes! I'll post the recording :)

👍 Casey Youngflesh
🙌 Nikhil Vytla (he/him), Talia Speaker
Monty Ammar (montyx23@gmail.com)
2022-05-20 17:33:39

*Thread Reply:* Congratulations Sara 👏

Holger Klinck (hk829@cornell.edu)
2022-05-20 14:19:18

Congrats, Sara. You are a rockstar 🙂

❤️ Sara Beery
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-05-20 14:22:37

Congrats, Dr. Beery!

❤️ Sara Beery
Jon Van Oast (jon@wildme.org)
2022-05-20 14:30:29

very exciting!! congratulations. 🎉

❤️ Sara Beery
Fagner Cunha (fagner.cunha@icomp.ufam.edu.br)
2022-05-20 14:49:45

Congrats, @Sara Beery! 🎉🎉🎉

❤️ Sara Beery
Jake Wall (walljcg@gmail.com)
2022-05-21 06:05:08

Great job Sara!!

❤️ Sara Beery
Kasirat (kasirat_turfi@hotmail.com)
2022-05-21 16:50:17

Congratulations @Sara Beery! Would love to watch the presentation if there is a link to the recording 😄

❤️ Sara Beery, Zac Winzurk
Rita Pucci (rita.pucci85@gmail.com)
2022-05-23 03:44:07

congratulations on your PhD!!

❤️ Sara Beery
Hannah Kerner (hkerner@umd.edu)
2022-05-23 11:50:04

New funding opportunity
> Machine learning can help communities around the world understand, mitigate, and adapt to climate change. However, a lack of ground truth data limits the effective use of machine learning in low- and middle-income communities that are disproportionately impacted by climate change. Solutions created by and for those communities are crucial to address the climate crisis globally.
>
> To address this need, a group of philanthropies and data scientists have created Lacuna Fund. The Fund is the world’s first collaborative effort to provide data scientists, researchers, and social entrepreneurs in low- and middle-income contexts globally with the resources they need to produce training and evaluation datasets that address urgent problems in their communities.
>
> Lacuna Fund just launched its most recent call for proposals focused on machine learning datasets for equitable climate outcomes in climate & health. Proposals are due on 17 July. See the full announcement, with more details on eligibility and selection, on the Lacuna Fund Apply page.

Lacuna Fund
👍 Sara Beery, Jason Holmberg (Wild Me), Peter van Lunteren, Justin Kay, Suzanne Stathatos, Matt Weldy, Kakani Katija, Dan Morris, Lily Xu, Ivan Zvonkov, Rita Pucci, Merlin Bleile, Kasirat, Carly Batist, Caleb Robinson
Victor Anton (victor@wildlife.ai)
2022-05-24 23:50:25

Two post-doctoral opportunities for "AI in ecology" in Utrecht, Netherlands (PS I don't have any links with the institution or know more details about the positions, I just came across it) https://www.iamexpat.nl/career/jobs-netherlands/research-academic/two-postdoctoral-positions-ai-ecology-10-fte/391358

IamExpat
👍 Sara Beery, Lily Xu, Oisin Mac Aodha, Rita Pucci, Déva Sou, Stefan Schneider, Jason Holmberg (Wild Me)
😎 Carl Boettiger
Amir Patel (amir.patel@uct.ac.za)
2022-05-29 15:58:08

👋 Hi everyone! I'm new here 🙂 Looking forward to working and meeting you all!

👍 Peter van Lunteren, Lloyd Hughes
😍 Sara Beery
👋 Benjamin Kellenberger, Rita Pucci
Amir Patel (amir.patel@uct.ac.za)
2022-05-29 16:01:34

My lab is looking for a post-doc in multi-sensor data fusion for wildlife biomechanics. Please see advert attached! We are based in beautiful Cape Town, South Africa if you would like a change of scenery with lots of cool animals 😁 🐆🦁

🎉 Sara Beery, Oisin Mac Aodha
👍 Benjamin Kellenberger
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-05-30 04:28:26

*Thread Reply:* Hey Amir - do you want us to pop this on WILDLABS as well, so the broader conservation tech community can see it?

Amir Patel (amir.patel@uct.ac.za)
2022-06-01 04:14:21

*Thread Reply:* @Stephanie O'Donnell yes, please! 🙂

👍 Stephanie O'Donnell
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-06-01 05:00:24

*Thread Reply:* Does it have a close date?

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-06-01 05:00:53

*Thread Reply:* I've put it to close at 30 June, but let me know if I need to change that! https://www.wildlabs.net/career-opportunity/post-doc-multi-sensor-fusion-animal-biomechanics

wildlabs.net
Amir Patel (amir.patel@uct.ac.za)
2022-06-02 08:48:20

*Thread Reply:* thank you so much Stephanie! 30 June is perfect 🙂

🙌 Stephanie O'Donnell
slackbot
2022-05-31 05:43:18

This message was deleted.

Devis Tuia (devis.tuia@epfl.ch)
2022-05-31 05:54:47

*Thread Reply:* Hey! I think @Holger Klinck and @gvanhorn are bird enthusiasts, maybe they can help you out! Otherwise, try the #jobs channel. Then another question is probably to know what you are looking for (a master's, a PhD, an engineering job?)

Déva Sou (soudeva974@gmail.com)
2022-05-31 07:17:11

*Thread Reply:* Hi Devis, thanks for the feedback!

Alayna Van Dervort (av@thebigwild.com)
2022-05-31 18:30:38

https://crcs.seas.harvard.edu/get-involved

crcs.seas.harvard.edu
❤️ Lily Xu, Sara Beery, Alexander Robillard, Dhruv Sheth
Alayna Van Dervort (av@thebigwild.com)
2022-05-31 18:30:43

The Harvard Center for Research on Computation and Society (CRCS) is currently accepting applications for postdoctoral fellows

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-04 06:05:14

https://techcommunity.microsoft.com/t5/internet-of-things-blog/wildlife-monitoring-and-conservation-with-azure-percept/ba-p/3390910

TECHCOMMUNITY.MICROSOFT.COM
👍 Justin Kay, Alexander Robillard, Ed Miller
👀 Alexander Robillard, Rita Pucci, Crystal Huang
Justin Kay (justinkay92@gmail.com)
2022-06-04 12:48:13

*Thread Reply:* If anyone here has experience with these I'd love to hear how they stack up to Jetson / Coral options for edge

Ed Miller (ed@hypraptive.com)
2022-06-05 19:19:15

*Thread Reply:* For ease of use, the Percept is pretty good. I haven't had a chance to compare performance across these platforms. @Henrik Cox (Sentinel) & @Sam Kelly, did you do any performance or efficiency comparisons across Jetson, Coral, etc.?

Henrik Cox (Sentinel) (henrik@conservationxlabs.org)
2022-06-06 16:56:17

*Thread Reply:* We tried out the Jetson a while ago and have settled on the Coral since. Still using to this day

Peter Griggs (peter@deepai.org)
2022-06-06 13:01:13

Hi All! I'm looking for individuals or teams that are interested in trying out our computer vision platform for detecting species in camera trap imagery. Anyone can use it, and it can be quickly taught to find new species in your camera trap imagery after seeing only dozens of images.

Our goal is to reduce the time teams spend reviewing camera trap imagery by 80-90%

There's a free community version we host that anyone can start training on their own camera trap images here: https://deepai.org/zendo

I'd be happy to chat with anyone interested as well!

❤️ Sara Beery, Alexander Robillard
👍 Talia Speaker, Carly Batist, Alexander Robillard, Dan Morris
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-06 14:29:41

Just out of curiosity, what’s the difference between this and other software out there for camera trap image processing (Wildlife Insights, Zamba, MegaDetector, Trap Tagger, Conservation AI, MLWIC2, FASTCAT, etc.)? Would be awesome to have a comparison sheet of all the camera trap image processing software since I feel like there are a lot popping up recently! There is this one guide but it doesn’t have all the software options available (just 3 I believe) - https://ai-camtraps.netlify.app/index.html. I don’t mean to sound negative about a new software; it’s obviously great that there are different options, just that it would be good to have pros/cons of each for people to see which is best for their use case! 🙂

👍 Talia Speaker, Peter Griggs, Alexander Robillard
Talia Speaker (talia.speaker@wildlabs.net)
2022-06-06 14:44:48

*Thread Reply:* This camera trap solution comparison doc by @Petar Gyurov is a great start on that Carly! https://www.notion.so/Camera-Trap-Pipeline-Solution-Comparison-2eac80825c4941b0b2b5fad3daea1cc3

Petar's Notion on Notion
👍 Carly Batist, Stephanie O'Donnell
😊 Petar Gyurov
Talia Speaker (talia.speaker@wildlabs.net)
2022-06-06 14:48:26

*Thread Reply:* Also Dan's helpful running resource for overviews of solutions: https://agentmorris.github.io/camera-trap-ml-survey/

Camera Trap ML Survey
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-06 14:51:02

*Thread Reply:* Oh awesome, hadn’t heard of these yet, thanks for sharing! Will def add these to the next update of the Conservation Tech Directory. I think the only ones I can think of that aren’t on Petar’s list are TrapTagger, FASTCAT cloud, ConservationAI, and obviously this new one Peter mentioned (Zendo).

🙌 Talia Speaker, Peter Griggs, Rita Pucci
Peter Griggs (peter@deepai.org)
2022-06-06 15:22:17

*Thread Reply:* Oh these are great lists! Thanks for sharing.

The main difference between our platform, Zendo, and other options is that you can train it on your own camera imagery really quickly, and get a highly accurate model fit to your dataset and species of interest.

That model you've trained is hosted behind the scenes, and accessible for use at scale within the platform, or via an API.

You can then collaborate on and review new images within the platform, or export everything to a CSV file.

👍 Talia Speaker, Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-08 03:57:46

https://www.straitstimes.com/singapore/environment/new-app-launched-to-help-in-fight-against-illegal-trade-of-shark-and-ray-fins

The Straits Times
👏 Rita Pucci, Jason Holmberg (Wild Me), Alexander Robillard
🎉 Jon Van Oast, Dan Morris, Alexander Robillard
Thiago Bicudo (bicudotks@gmail.com)
2022-06-11 13:12:05

https://twitter.com/mclduk/status/1535166475518058497?t=RtcNBPuUwNCyjTMPD4XjRw&s=08

Looking for a postdoc opportunity using AI, sound and image recognition for nature conservation? We're about to advertise 2 x 2-year postdocs here in the beautiful Dutch city of Leiden at @Naturalis_Sci. Feel free to message me/email me if you want to know more

twitter
} Dan Stowell (https://twitter.com/mclduk/status/1535166475518058497)
❤️ Thiago Bicudo, Alexander Robillard, Sara Beery, Lily Xu, Rita Pucci, Subhransu Maji
👍 Carly Batist, Alexander Robillard, Eelke
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-11 14:50:28

*Thread Reply:* ha - great minds think alike, I posted it in the jobs channel today too😂

Thiago Bicudo (bicudotks@gmail.com)
2022-06-11 14:55:41

*Thread Reply:* Thanks Carly, I'm new here and didn't realize that there is a job channel. Thank you so much 😘

😲 Rita Pucci
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-11 14:56:45

*Thread Reply:* Yes definitely check out the other channels! The ‘jobs’, ‘upcomingevents’ and ‘newpapers’ are particularly helpful for me in staying up to date with what’s going on in the field 🙂

❤️ Thiago Bicudo
Devis Tuia (devis.tuia@epfl.ch)
2022-06-12 13:32:01

*Thread Reply:* I wish! I always enjoyed visiting Naturalis during the Dutch part of my life.. say hi to Rutger Vos, if he is still there!

Rita Pucci (rita.pucci85@gmail.com)
2022-06-14 03:21:52

*Thread Reply:* @Thiago Bicudo, which job channel ??

Devis Tuia (devis.tuia@epfl.ch)
2022-06-14 04:49:06

*Thread Reply:* @Rita Pucci, here: #jobs

Rita Pucci (rita.pucci85@gmail.com)
2022-06-14 04:49:49

*Thread Reply:* ah, thanks, it didn't pop up in the channels list before!

👍 Devis Tuia
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-14 06:03:08

*Thread Reply:* not all of them necessarily appear at first - when I first joined I had the same issue! I had to go to the “+” button next to channels --> ‘browse channels’ and then I saw a bunch of other ones I joined

👍 Rita Pucci
Devis Tuia (devis.tuia@epfl.ch)
2022-06-15 02:50:06

*Thread Reply:* exactly… it’s a discovery process 😛

😂 Carly Batist, Sara Beery, Rita Pucci
Patrizia Paci (pp4649@open.ac.uk)
2022-06-15 11:18:26

Hi everyone, I am new in the channel and just wanted to greet all of you! My name is Patrizia and I am an associate lecturer in Evolutionary Biology and researcher in Animal-Computer Interaction at the Open University in the UK. I developed an interest in Machine Learning. Thus, I very recently visited the Machine Vision group at TU Delft in the Netherlands to work on training algorithms and annotations to recognise animal activities on videos. I am interested in ML applications to bioacoustics too. I would like to continue my research in ML for animal activity recognition, integrating behavioural ecology and ML. Happy to be part of your community and share with you knowledge, insight, challenges, etc. Special thanks to Silvia Zuffi who invited me to this channel :)

🎉 Stefan Schneider, Sara Beery, Oisin Mac Aodha, Alexander Robillard, Stephanie O'Donnell, Bilgenur Baloglu, Rita Pucci
👋 Benjamin Kellenberger, Jon Van Oast, Declan, Omiros Pantazis
Subhransu Maji (smaji@cs.umass.edu)
2022-06-15 11:23:55

*Thread Reply:* Welcome! I’m in Delft too on a sabbatical!

Patrizia Paci (pp4649@open.ac.uk)
2022-06-15 12:25:48

*Thread Reply:* Hello Subhransu, I loved Delft!

👍 Subhransu Maji
Bilgenur Baloglu (bilgenurb@gmail.com)
2022-06-15 13:10:30

Hi everyone! I am also new in the channel and really wanted to join after hearing about it at Sara Beery's talk. This is Bilgenur, I am a lecturer at USC (teaching machine learning and genomic data analysis) and a bioinformatics scientist at Thermo Fisher. I am interested in starting my company for eDNA based monitoring, focusing on water (freshwater and ocean). My PhD work was on biomonitoring of Singapore's aquatic ecosystems using DNA sequencing technologies and my postdoc work took me all the way to sub-arctic Canada! Really happy to join you all, and look forward to future conversations and collaborations.

👋 Sara Beery, Declan, Omiros Pantazis, Déva Sou, Patrizia Paci, Rita Pucci
✌️ Burak Ekim
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-15 13:31:23

*Thread Reply:* Assuming you know about NatureMetrics? If you wanted to do eDNA based aquatic monitoring they’re doing that already!

Bilgenur Baloglu (bilgenurb@gmail.com)
2022-06-15 13:41:18

*Thread Reply:* Yes! Very aware of them since the beginning of NatureMetrics :) The niche market or the customer base I am thinking about is a little different than theirs.

👍 Carly Batist
Ando Shah (ando@berkeley.edu)
2022-06-15 23:43:01

*Thread Reply:* Hi Bilgenur! Welcome! A project I'm working on involves using eDNA within a very large MPA to assess biodiversity, and working with NatureMetrics is problematic for a few reasons. We would welcome new players in this field, and in general our team would be happy to chat more! If this is of interest, please DM me and we can chat

❤️ Bilgenur Baloglu, Sara Beery
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-16 07:04:07

*Thread Reply:* I’d also recommend posting to the WILDLABS eDNA group! Lots of people working on really cool stuff there. https://wildlabs.net/groups/edna-genomics

wildlabs.net
👍 Bilgenur Baloglu
Bilgenur Baloglu (bilgenurb@gmail.com)
2022-06-16 14:44:22

*Thread Reply:* Hi Ando! Thank you so much for your response. I would be happy to hear more about your project!

Bilgenur Baloglu (bilgenurb@gmail.com)
2022-06-16 14:46:15

*Thread Reply:* Will do, thanks for bringing this group to my attention, Carly!

👍 Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-23 19:08:07

*Thread Reply:* Just came across this and figured it may be of interest as well - https://www.ednacollab.org/ (they also have a grants program)

❤️ Bilgenur Baloglu
Lily Xu (lily_xu@g.harvard.edu)
2022-06-17 01:20:13

EAAMO Doctoral Consortium applications out and due July 8! A fantastic second-year conference growing out of a wonderful and supportive research community (Mechanism Design for Social Good)

EAAMO is October 6–9 in Washington DC

> The second ACM conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO'22) aims to highlight work where techniques from algorithms, optimization, and mechanism design, along with insights from the social sciences and humanistic studies, can help improve equity and access to opportunity for historically disadvantaged and underserved communities. https://eaamo.org/doctoral_consortium/

👍 Ayan Mukhopadhyay
Thijs (thijs@q42.nl)
2022-06-17 05:53:24

Wednesday was an exciting (and a bit terrifying) day. Sorry, I just feel the need to share this with this group of beautiful people 🤩

I gave a TEDx talk! It's about how I've been using my tech skills for good / conservation.

My title is: From code to conservation: a nerd's search for meaning

Hopefully the recording will come online in a couple of weeks so I can also share the video here (if that's okay) 🙏

🙌 Stephanie O'Donnell, Ștefan Istrate, Carly Batist, Catherine Villeneuve, Omiros Pantazis, Yihang She, Lily Xu, Kakani Katija, Declan, Bilgenur Baloglu, Suzanne Stathatos, Mark Jordan, Caleb Robinson, nyakundi lamech, Déva Sou, Catherine Wang, Rita Pucci, Crystal Huang
💕 Jon Van Oast
👏 Rita Pucci
👍 Jan Kees
Thijs (thijs@q42.nl)
2022-06-17 06:04:03

*Thread Reply:* I talk about the effect of climate change on the African rainforest. Especially the impact it has on forest elephants and why that is causing more Human-Elephant-Conflicts. Obviously not in much detail, because I only had 14 minutes to talk 🙂

I can tell you that this was by far one of the coolest but also scariest talks I've given so far. 😇

💚 Lily Xu, Caleb Robinson, Rita Pucci, Déva Sou
Caleb Robinson (calebrob6@gmail.com)
2022-06-18 09:50:40

*Thread Reply:* Congrats @Thijs!! I can't wait to watch!

Devis Tuia (devis.tuia@epfl.ch)
2022-06-18 20:10:48

*Thread Reply:* congrats @Thijs! I gave one last year in Martigny and I agree 100%: super scary but also very rewarding

👍 Déva Sou
Thijs (thijs@q42.nl)
2022-06-19 07:46:01

*Thread Reply:* Thanks Caleb!

Thijs (thijs@q42.nl)
2022-06-19 07:46:28

*Thread Reply:* Cool @Devis Tuia I'll try to look up your talk on YouTube 👍

Thijs (thijs@q42.nl)
2022-06-19 07:48:33

*Thread Reply:* Ah it's in french, I don't understand that 😇

Devis Tuia (devis.tuia@epfl.ch)
2022-06-20 11:33:51

*Thread Reply:* 😄

Colin Donihue (colindonihue@gmail.com)
2022-07-19 10:00:36

*Thread Reply:* Hey @Thijs! I’d love to see your talk. I looked for it online but couldn’t find it, do you know if it’s been posted? Thanks!

Thijs (thijs@q42.nl)
2022-08-01 05:25:50

*Thread Reply:* @Colin Donihue The talk is not online yet, this usually takes 4-8 weeks for editing and reviewing by TED. Once it's online I'll post it here 👍

👍 Colin Donihue
Devis Tuia (devis.tuia@epfl.ch)
2022-06-18 20:11:53

In case you are around at CVPR2022, please let me know! It would be super cool to have a AIforConservation meetup (for example for lunch once)! you can find me for sure tomorrow (Sunday) at the Earthvision workshop: https://www.grss-ieee.org/events/earthvision-2022/

GRSS-IEEE
🙌 Hannah Kerner, Sara Beery, Oisin Mac Aodha, Kakani Katija, Sophia Abraham, Stephanie O'Donnell, Rita Pucci, Helena Russello, Suzanne Stathatos, Kasirat, Dhruv Sheth
Devis Tuia (devis.tuia@epfl.ch)
2022-06-18 20:12:49

*Thread Reply:* (room 219, from 8:30 on)

Frederic (frederic@apic.ai)
2022-06-19 20:35:02

*Thread Reply:* Hi @Devis Tuia, I am at CVPR too, as are probably a few others from this Slack channel.

A few are probably attending the CV4Animals workshop tomorrow. How about we get something for dinner afterwards with anyone who is interested?

❤️ Sophia Abraham, Helena Russello
Devis Tuia (devis.tuia@epfl.ch)
2022-06-20 11:33:38

*Thread Reply:* Cool! Let’s try lunch first at the cv4animals break? Like we can meet at the end of the session just outside of the cv4animals room? 11h45?

Daniel Davila (daniel.davila@kitware.com)
2022-06-20 12:18:43

*Thread Reply:* My colleague, Matt Dawkins, will be giving a talk at the CV4Animals workshop in a little while here. We have a lot of team members from Kitware milling about the conf and are at booth #1522 in the exhibition, if anyone wants to meet up! Have a good week y'all.

Devis Tuia (devis.tuia@epfl.ch)
2022-06-20 16:05:12

*Thread Reply:* @Frederic let’s discuss at the end of the sessions of cv4animals. I am in the room btw 😉

Frederic (frederic@apic.ai)
2022-06-20 16:07:22

*Thread Reply:* sorry, did not see your message. Let's meet afterwards to discuss it.

👍 Devis Tuia
Rita Pucci (rita.pucci85@gmail.com)
2022-06-21 03:55:39

Yesterday's Workshop was fantastic!! 😄 I met virtually lots of people, thanks for visiting my poster!

🙌 Silvia Zuffi, Sara Beery, Jason Holmberg (Wild Me), Alexander Robillard, Frederic, Helena Russello, Sophia Abraham
🎉 Jon Van Oast, Sophia Abraham
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-21 20:26:24

New paper on transfer learning with passive acoustic data out, led by @Emmanuel Dufourq! Happy to have played a small co-author part but he deserves the recognition for all the programming heavy lifting! https://www.sciencedirect.com/science/article/pii/S1574954122001388

sciencedirect.com
🎉 Jon Van Oast, Declan, Lily Xu, Suzanne Stathatos, Dan Morris, Marcus Lapeyrolerie, Juan Sebastián Cañas Silva
👏 Rita Pucci
👀 Alexander Robillard
Sara Beery (sbeery@caltech.edu)
2022-06-22 16:18:20

Auto Arborist has been released!! This fine-grained multiview dataset contains over 2 million trees belonging to over 300 genus-level categories in 23 cities across the US and Canada, built to foster the development of robust methods for large-scale urban forest monitoring.

https://twitter.com/sarameghanbeery/status/1539703332100521984

twitter
} Sara Beery (https://twitter.com/sarameghanbeery/status/1539703332100521984)
twitter
} Google AI (https://twitter.com/GoogleAI/status/1539680204594982912)
🎉 Carly Batist, Justin Kay, Declan, gvanhorn, Nikhil Vytla (he/him), Arjun Subramonian (they/them), Stefan Schneider, Daniel Grzenda, Ando Shah, Sachith Seneviratne, Suzanne Stathatos, Yihang She, Emilio Luz-Ricca, Alan Papalia, Vivek Mishra, Chris Yeh
😎 Jon Van Oast, Frederic, Ben Weinstein
🌳 Stefan Schneider, Daniel Grzenda, Frederic, Dan Morris, Oisin Mac Aodha, Catherine Villeneuve, Alan Papalia
🌴 Stefan Schneider, Daniel Grzenda, Helena Russello, Catherine Villeneuve, Alan Papalia, Riccardo de Lutio
🌲 Stefan Schneider, Daniel Grzenda, Frederic, Robin Zbinden, Catherine Villeneuve, Alan Papalia
🙌 Carl Boettiger
Lily Xu (lily_xu@g.harvard.edu)
2022-06-23 11:14:12

EarthRanger's Conservation Technology Award is providing two grants — $15K each — to organizations that are developing tech for wildlife. Applications due August 31 https://www.earthranger.com/conservation-tech-award

@Sara Beery @Jake Wall @Tanya Birch @Stephanie O'Donnell @Jes Lefcourt

👀 Stephanie O'Donnell, Jason Holmberg (Wild Me), Declan, Alexander Robillard
🎉 Carly Batist, Fadel
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-23 15:12:02

*Thread Reply:* Specific departments or areas? Some of those are very large & multi-faceted so I would have a better idea if I know someone with a bit more context/detail

Thijs (thijs@q42.nl)
2022-06-23 15:16:46

*Thread Reply:* Yeah, I do know some people at some of these organisations. But it's quite a long list. Maybe you can elaborate a bit on your intentions?

aruna (arunas@mit.edu)
2022-06-23 15:17:13

*Thread Reply:* Thanks @Carly Batist and @Thijs. I am looking for someone dealing with Facebook advertisements at these orgs.

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-23 15:17:48

*Thread Reply:* so someone in the marketing or comms departments then?

Thijs (thijs@q42.nl)
2022-06-23 15:18:07

*Thread Reply:* And let's assume you find people at these departments, what's your question?

aruna (arunas@mit.edu)
2022-06-23 15:18:09

*Thread Reply:* That sounds perfect! 🙂

aruna (arunas@mit.edu)
2022-06-23 15:18:22

*Thread Reply:* I am interested in talking to them about how they advertise on Facebook. 🙂

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-23 15:41:39

*Thread Reply:* Do you work for an org that is trying to do this? Just curious how you came up with this list

aruna (arunas@mit.edu)
2022-06-23 15:45:04

*Thread Reply:* I am a PhD student at MIT, looking at how environmental orgs communicate about CC.

aruna (arunas@mit.edu)
2022-06-23 15:47:11

*Thread Reply:* This list is from my initial survey of adv orgs. Happy to talk to other adv orgs who advertise/post on FB too!

Sara Beery (sbeery@caltech.edu)
2022-06-25 06:26:00

NVIDIA hardware grant program is back: https://mynvidia.force.com/HardwareGrant/s/Application

❤️ Sophia Abraham, Suzanne Stathatos, Jan Kees, Jason Parham, Carly Batist, Frederic, Carl Boettiger, David
🎉 Jon Van Oast
Ankur Kalra (ankur@hoplabs.com)
2022-06-27 14:25:56

Hi folks! New to the Slack, but met @Sara Beery at CVPR 22 and learned of this community. Excited to get to know folks and their projects! I'm at Hop Labs, which is an ML/CV consulting firm -- we do a lot of work for corporate clients, but we have a pro-bono program for non-profit research teams: https://www.hoplabs.com/pro-bono-work. Please do not hesitate to DM or email if your project could use some free help. Thanks!

❤️ Sara Beery, Stephanie O'Donnell, Jason Holmberg (Wild Me), Suzanne Stathatos, Carly Batist, Talia Speaker, Lily Xu
👋 gvanhorn, Jason Holmberg (Wild Me), Jon Van Oast, Dan Morris, David
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-30 12:04:41

Climate Change AI Summer School - call for mentors
Description: https://docs.google.com/document/d/1C1bfCesbAqMNmfEJzn7QETv2pNYjks3ZQl9CUqXO3o0/edit
Application form: https://docs.google.com/forms/d/1y3DoURFesd8k1NQN1mZSOqYgmbJxwpN4syIFfS_bpDs/viewform?edit_requested=true

👍 Sara Beery, Daniel Spokoyny, Lily Xu, Emily Lines
:thumbsup_all: Frederic Fol Leymarie
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-06-30 12:06:27

*Thread Reply:* disregard my email's highlighting… (just noticed that 🤦‍♀️)

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-07-06 11:48:24

🗣️🔊🎶 For those interested in AI for bioacoustics/ecoacoustics applications 🗣️ 🔊🎶

The Bioacoustics Stack Exchange has reached the Beta stage and is LIVE! It is also in a critical stage, and we need your help to keep it alive. We have only 2 weeks to prove we have the community to support the site, so please help as you can! If you want more information on how to help or get started with Stack Exchange, check out this helpful YouTube tutorial made by Selene Fregosi.

There have been LOTS of questions on AI/ML for bioacoustics recently so would be great to have input/users from the tech side contributing as well!

How can you help?
• JOIN: https://bioacoustics.stackexchange.com/
• VISIT: Make it a habit to check in on the site regularly. We need >>500 visits per day!
• VOTE: We need to show participation! Vote on all good Q/A
• ASK: Have a new bioacoustics-related question? What about your older questions? You can even ask/answer your own question! These questions will be archived for future use, so ALL good questions are welcome! We need >>10 questions per day!
• ANSWER: Have an answer to one of the questions, or additional information? Share your expertise where you can! Is there a question you are often asked? You can pair up with someone to ask/answer it again to be archived! If someone asks you a question -- tell them you will answer it on the Stack Exchange! We need ~2.5 answers per question.
• INVITE & SHARE: Invite others to join! Share on social media, with your collaborators, etc.!

bioacoustics.stackexchange.com
❤️ Sara Beery, Declan, Alexander Robillard, Lily Xu, Jan Kees
😎 Jon Van Oast
🔊 Stefan Schneider, Elijah Cole (Deactivated)
🙌 Anton Alvarez
Yves Bas (yves.bas@gmail.com)
2022-07-08 09:33:11

Hello, a TREE paper on new techs and insect monitoring with an important focus on AI, computer vision and acoustics: https://www.sciencedirect.com/science/article/pii/S0169534722001343

sciencedirect.com
🔊 Oisin Mac Aodha, Alan Papalia, Suzanne Stathatos, Jason Holmberg (Wild Me), Sara Beery, Belen Saavedra
🐜 Carly Batist, Stefan Schneider, Stephanie O'Donnell, Jason Holmberg (Wild Me), Déva Sou, Tarun
👍 Michael Bunsen
Sara Beery (sbeery@caltech.edu)
2022-07-11 20:17:31

NSF funding opportunity: https://beta.nsf.gov/funding/opportunities/partnership-advance-conservation-science-and-practice-pacsp.

Beta site for NSF - National Science Foundation
💯 Carly Batist, Jason Holmberg (Wild Me), Subhransu Maji, Stephanie O'Donnell, Ben Weinstein, Olivier Gimenez
George Darrah (george.darrah@systemiq.earth)
2022-07-14 10:39:58

Hey everyone - recently posted my take on the role of biodiversity monitoring tech in scaling up private sector investment into nature... would be super interested in expert feedback! thanks 🙏 https://www.linkedin.com/posts/georgedarrahwe-need-to-bring-biodiversity-to-the-bloomberg-activity-6950070143146209280-e59v?utmsource=linkedinshare&utmmedium=memberdesktopweb

linkedin.com
👍 Stephanie O'Donnell, Sara Beery, Suzanne Stathatos, Alexander Robillard, Talia Speaker, Carly Batist, nyakundi lamech
😎 Jon Van Oast, Alexander Robillard
:first_place_medal: Jan Kees
Jan Kees (jankees.schakel@sensingclues.org)
2022-07-15 02:27:57

thx @George Darrah, very interesting read!

Frederic Fol Leymarie (ffl@dynaikon.com)
2022-07-15 07:26:47

Monitoring: yes; protecting: YES; rewilding: even BETTER (leaving these to the private sector .. is not enough).

George Darrah (george.darrah@systemiq.earth)
2022-07-18 09:55:25

*Thread Reply:* Agreed - but my sense is that without private sector funding and significant change to taxation we will not turn the tide on biodiversity losses

Frederic Fol Leymarie (ffl@dynaikon.com)
2022-07-18 12:44:16

*Thread Reply:* So lobbying both industry and government is needed, and keep raising awareness in the public and even in the academic spheres.

nyakundi lamech (lamechongondi88@gmail.com)
2022-07-15 15:28:08

Hi Everyone, my name is Lamech, and I am relatively new to this forum. I recently completed my undergraduate in statistics and have been since working on a startup I helped co-found. Our pioneer tool, E-Savior uses computer vision algorithms to reduce human-animal conflict through monitoring animal movements. We are currently using it to monitor elephants and hope to scale it up to other species in the future. My broad research interests are computational science in conservation. Next week I will be attending the APAC conference in Kigali, Rwanda, and would be happy to meet folks from this forum for a chat to learn more about all the exciting projects that you are all working on. I am also open to exciting roles in this field, should you have any! I can’t wait to meet those who will be attending APAC. Please email me on lamechongondi88@students.uonbi.ac.ke

🐘 Jason Parham, Subhransu Maji, Fadel, Sara Beery, Malte Pedersen, nyakundi lamech, Alexander Robillard
👍 Benjamin Kellenberger, nyakundi lamech, Alexander Robillard
Sara Beery (sbeery@caltech.edu)
2022-07-18 13:01:17

Hey everyone! Bit of personal news from your friendly neighborhood AI for Conservation founder - I've accepted a faculty position at MIT, starting fall 2023! I'll spend the next year at Google working on multimodal/multiview urban forest monitoring (see Auto Arborist), but I'll be hiring PhD students and a PostDoc this fall to start with me next year! If you're looking for an academic position at the intersection of CV/ML, biodiversity, and the environment please reach out!! I'm looking to hire students from diverse backgrounds to work collaboratively on interdisciplinary research projects.

twitter
} Sara Beery (https://twitter.com/sarameghanbeery/status/1548383002496929792)
Google AI Blog
🎉 Avi Sundaresan, Peter Bull, Jason Holmberg (Wild Me), Talia Speaker, aruna, Mark Goldwater, Declan, Daniel Grzenda, Suhail Alnahari, Suzanne Stathatos, Jon Van Oast, Alan Papalia, Daniel Spokoyny, Fagner Cunha, Ted Schmitt, Lucia Gordon, Justin Kay, Felipe Parodi, Carly Batist, Amrita Gupta, Lily Xu, Leonardo Viotti, Alexander Robillard, Matt Weldy, Lukas Picek, Elijah Cole (Deactivated), Bilgenur Baloglu, Lauren Gillespie, Dan Morris, Belen Saavedra, Stefan Schneider, Mitch Fennell, Blair Costelloe, Hemal Naik, Liyuan Zhu, Riccardo de Lutio, Yihang She, Diego Marcos, nyakundi lamech, Agnethe Seim Olsen, Cameron Trotter, Colin Donihue, Déva Sou, Oliver Broadrick, Dongmin (Dennis) Kim, Ando Shah, Timm Haucke, Alex Borowicz, Georgia Atkinson, Dhruv Sheth
👏 Malte Pedersen, Peter Bull, Jason Holmberg (Wild Me), Oisin Mac Aodha, aruna, Mark Goldwater, Daniel Grzenda, Jon Van Oast, Alan Papalia, Subhransu Maji, Emilio Luz-Ricca, Lily Xu, Alexander Robillard, Lukas Picek, Lauren Gillespie, Belen Saavedra, Stefan Schneider, Kalyan Nadimpalli, Liyuan Zhu, Omiros Pantazis, nyakundi lamech, Atriya Sen, Thijs, Dongmin (Dennis) Kim, Timm Haucke, Dhruv Sheth, Chris Yeh, Anton Alvarez, Lloyd Hughes, Andrew Schulz, David
❤️ Suzanne Stathatos, Ben Weinstein, Lily Xu, Alexander Robillard, Lukas Picek, Eddie Zhang, Belen Saavedra, Stefan Schneider, Mark Goldwater, Liyuan Zhu, nyakundi lamech, Dongmin (Dennis) Kim, Dhruv Sheth, Andrew Schulz, David
🙌 Sean P. Rogers, Dhruv Sheth, Andrew Schulz
Peter Bull (peter@drivendata.org)
2022-07-18 13:02:49

*Thread Reply:* Amazing, congrats!! 🎊

❤️ Sara Beery
nyakundi lamech (lamechongondi88@gmail.com)
2022-07-19 07:10:13

*Thread Reply:* Great !👏

❤️ Sara Beery
Déva Sou (soudeva974@gmail.com)
2022-07-19 12:30:14

*Thread Reply:* Congratulations!

❤️ Sara Beery
Oliver Broadrick (obroadrick@gwmail.gwu.edu)
2022-07-19 19:48:48

Hi all, I'm an MS student studying CV at GW, advised by Prof Robert Pless. I love nature and conservation (fly fishing, hiking...), and I was recently introduced to @Sara Beery’s amazing work! That led me to finding this slack... so, hi!

Among other things, I'm currently playing with pictures of trout. If you have trout pictures, send them my way! 🙂

🐟 Sara Beery, Suzanne Stathatos, Kakani Katija, Jason Holmberg (Wild Me), Declan, Déva Sou, Alexander Robillard, Belen Saavedra
👋 gvanhorn
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2022-07-19 19:50:00

*Thread Reply:* feel free to join the #marine channel all about CV particularly for marine things!

Oliver Broadrick (obroadrick@gwmail.gwu.edu)
2022-07-19 19:52:26

*Thread Reply:* Joined, thanks!

Sean P. Rogers (sean.rogers@uvm.edu)
2022-07-20 11:17:33

Hi Everybody, @Dan Morris introduced me to this great community. I'm a PhD student in Complex Systems and Data Science at the University of Vermont studying applications of ML and NLP to identifying wildlife exploitation on social media. I hold a general interest in conservation technology and environmental security issues. I'm looking forward to learning with and interacting with you all. 🐢

👋 gvanhorn, Declan, Sara Beery, Dan Morris, Yihang She, Oliver Broadrick, Carly Batist, Suzanne Stathatos, Alex Borowicz, Belen Saavedra, Benjamin Kellenberger
Ryan Feng (ryan.feng16@gmail.com)
2022-07-20 12:36:45

Hi all, just learned about this from coming across Sara’s website! I'm a robotics M.S. student at UMich. I know a lot of y'all will be from more CV/DS backgrounds, but conservation is something I feel pretty strongly about and I’m looking forward to learning more about where robotics can play a part!

🤖 Sara Beery, Declan, Carly Batist, Oliver Broadrick, Suzanne Stathatos, Jason Holmberg (Wild Me), Alan Papalia, nyakundi lamech
👋 Benjamin Kellenberger, Yihang She, Jaanak
Muskan Sachdeva (muskan@meta-lynx.com)
2022-07-21 07:57:01

Hi Everyone, I am an MBA student working with Metalynx, a data curation start-up based out of London. Our platform helps users visualise and curate image data sets to uncover biases and corrupted data, and efficiently choose which images to label. I found this slack channel following @Sara Beery's amazing work in conservation. Additionally, @Dan Morris's github page and @Petar Gyurov's notion page have been immensely helpful for me in understanding the AI-in-conservation landscape.

I would be extremely grateful if I could have a short 15-20 min chat with anyone working with computer vision to better understand the workflow when using CV in conservation and the issues encountered in development using large image data sets.

👋 Sara Beery, Oliver Broadrick, Jason Holmberg (Wild Me), nyakundi lamech, Jaanak
Jaagat P. (jaagatp05@gmail.com)
2022-07-22 14:00:30

Hi All! I hope this finds you well. My name is Jaagat and I am a rising junior in high school (probably didn't expect to see one here 😅) who's truly passionate about leveraging AI for social good/conservation and would be more than grateful to connect with you all. While I still have a lot to learn, I can't wait to see the potential that AI, robotics, and ML hold for our future! It's a true privilege to join such a community!

👍 Jason Holmberg (Wild Me), Justin Kay
👋 Declan, Jon Van Oast, Belen Saavedra, Oliver Broadrick, Sara Beery, Andy Viet Huynh, Carly Batist, nyakundi lamech, Eddie Zhang, Carl Boettiger, Jaanak
👶 Ryan Feng
🚀 Carl Boettiger
Marconi Campos (marconi@rfcx.org)
2022-07-22 16:26:42

Hi everyone, I am new here on the channel and just wanted to greet you all! My name is Marconi Campos; I'm a tropical ecologist committed to improving wildlife detection, monitoring, and conservation. I'm currently the chief scientist at Rainforest Connection (https://rfcx.org), where we have been working hard combining acoustic monitoring, AI, and occupancy models to understand how fauna is responding to both natural and human disturbances all around the world. I'm interested in ML applications to bioacoustics but also eager to learn and combine different approaches to improve biodiversity monitoring. Happy to be here and excited to learn a lot and share some ideas ; )

🌿 Suzanne Stathatos, Oliver Broadrick, Sara Beery, Declan, Andy Viet Huynh, Carly Batist, Jaagat P., nyakundi lamech, Jaanak, Julia Marisa Sekula
😎 Jon Van Oast
🎉 Dan Morris, Justin Kay, Eddie Zhang
👋 Carl Boettiger
Jaanak (jaanak007@gmail.com)
2022-07-22 20:56:37

Hi all! My name is Jaanak Prashar, and I am a rising junior in high school (definitely not Jaagat's fraternal twin). I hold a strong passion for computational biology, molecular genetics, and combining artificial intelligence with genetics and health equity. I also have a passion for philosophy, particularly the philosophy of the RNA world in genetics! It is a great pleasure and a privilege to be here, and I am excited to learn a lot!

😁 Sara Beery, Justin Kay, nyakundi lamech, Suzanne Stathatos
👬 Sara Beery, Jaagat P., Suzanne Stathatos, Jaanak
👋 Peter van Lunteren, Mark Goldwater, Eddie Zhang, Déva Sou
🚀 Carl Boettiger
Jaanak (jaanak007@gmail.com)
2022-07-25 14:54:32

Hi all,

I hope this message finds you well. My twin brother Jaagat and I are currently looking for any research opportunities in the fields of machine learning and data science and/or their practical applications (i.e. genomics, ecology, etc.). We would be grateful for any research opportunities available (unpaid) either during the summer, throughout the year, or even next summer. We both would also be more than happy to share our resumes as well as past research I have conducted and presented at a conference. Have a lovely day!

Best,

The twins 👬

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-07-25 14:56:12

*Thread Reply:* Where are you based geographically now? That would help with identifying projects

Jaanak (jaanak007@gmail.com)
2022-07-25 14:57:18

*Thread Reply:* We are both located in Texas, and we are currently seeking any remote opportunities!

Mark Goldwater (mgoldwater@whoi.edu)
2022-07-25 15:00:21

*Thread Reply:* Try your best to not settle for unpaid internships. You deserve money for your efforts!

💯 Declan, Ștefan Istrate, Alexander Robillard, Andy Viet Huynh, Anton Alvarez, Fridah Nyakundi
👍 Carly Batist, Sara Beery, Yves Bas, Elijah Cole (Deactivated), Andy Viet Huynh, Omiros Pantazis, Jaanak, Jaagat P.
Jaanak (jaanak007@gmail.com)
2022-07-25 15:01:07

*Thread Reply:* We will be happy to just gain exposure and learn at this point of time, but maybe in the future! 🙂

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-07-25 15:02:30

*Thread Reply:* If you live in a town with a university, see if there are any labs or professors working in the AI for good space and reach out. Many professors I know have high school students working in their labs or work with high school programs so it’s worth a shot to ask! You might also check out the Conservation Tech Directory to scope out possible organizations/companies/projects to reach out to.

👍 Jaanak, Jaagat P.
Daniel Davila (daniel.davila@kitware.com)
2022-07-25 20:05:28

*Thread Reply:* I have to echo Mark, it's actually more and more common for companies to take on high school talent. If you are contributing value to the enterprise, you deserve compensation!

That being said, what part of Texas? There are a ton of great universities out there, the UT system for example, with a campus in every sector. There is a very large research institute in San Antonio, SwRI, that may have programs for your level. You could also just jump into an open source citizen science project if you have your heart set on free labor; there are so many out there.

➕ Mark Goldwater
👍 Jaanak, Carly Batist, Jaagat P.
Jaanak (jaanak007@gmail.com)
2022-07-28 00:41:20

*Thread Reply:* Thank you all very much for the guidance; I greatly appreciate it. I will also definitely consider looking around for paid opportunities as well!! 🙂

✅ Jaagat P.
Julia Marisa Sekula (jmsekula@stanford.edu)
2022-07-26 09:46:52

Hello Everyone!! So great to find a community of people working on similar issues. I'm here as @Lauren Gillespie was kind enough to invite me! I'm an MBA/MSc in Nature Co-Design and Climate Tech student at Stanford University. My background is 6 years in financial markets (distressed + special situations), and the last 3 years I spent in policy and climate tech in the Amazon Rainforest in Brazil 🇧🇷 (which is where I'm from; any Brazilians or Latin Americans, please reach out). I'm particularly interested in genome sequencing and finding ways to collect, systematize, and understand nature's intelligence (and, in our current systems, quantify nature's economic value). I'm also currently developing Obvious Ventures' Synthetic Biology for Climate thesis. If anyone is looking at the Amazon and/or wants to talk ideas, I'd love to chat.

👋 Sara Beery, Andy Viet Huynh, Catherine Villeneuve, Marconi Campos, Jason Holmberg (Wild Me), Jaagat P., Mark Goldwater, Lauren Gillespie, Daniel Spokoyny, Suzanne Stathatos, Declan, Emilio Luz-Ricca, Jaanak, Lily Xu, nyakundi lamech
🌎 Belen Saavedra, Jason Holmberg (Wild Me), Jaagat P., Andy Viet Huynh, Lauren Gillespie, Lily Xu
👏 Ben Wilcox
Toryn Schafer (tschafer@tamu.edu)
2022-07-26 17:28:04

Hello, I've been lurking for a few months after learning about @Sara Beery’s work through the academic job market. I am an incoming assistant professor in the Statistics department at Texas A&M University. I'm broadly interested in machine learning in ecology and focus primarily on movement ecology using inverse reinforcement learning. Sara and I are actually scheduled in the same symposium for the upcoming TWS Annual Conference 2022. I hope to be more active in this community!

👋 Suzanne Stathatos, Sara Beery, Declan, Carly Batist, Andy Viet Huynh, Elijah Cole (Deactivated), Dan Morris, Jaagat P., Belen Saavedra, Devis Tuia, Dhruv Sheth, Emilio Luz-Ricca, Armin Bazarjani, Jaanak, Lily Xu, Kaiyang, nyakundi lamech, Marcus Lapeyrolerie
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2022-07-26 17:34:30

*Thread Reply:* Also, huge congrats on being done with the job market and landing a spot at A&M!

➕ Sara Beery
Toryn Schafer (tschafer@tamu.edu)
2022-07-26 17:42:42

*Thread Reply:* Thank you!

Sara Beery (sbeery@caltech.edu)
2022-07-27 10:10:04

New issue of JMLR out with a focus on climate change!! https://www.jmlr.org/special_issues/climate_change.html

💜 Arjun Subramonian (they/them), Lukas Picek, Mark Goldwater, Jason Holmberg (Wild Me), Subhransu Maji, Andy Viet Huynh, Declan, Gabriel Tseng, Jaagat P., Armin Bazarjani, Carl Boettiger, Catherine Villeneuve, Eddie Zhang, Jaanak, Lily Xu
🎉 Eddie Zhang, Kaiyang
Andrew Hartnett (a.t.hartnett@gmail.com)
2022-07-27 11:09:21

Hi everyone. I am a ML engineer currently working on models to understand and predict the behavior of roadway agents for autonomous driving. Previously I worked on problems at the intersection of physics, machine learning, and collective animal behavior (mostly shoaling fish). I’m exited to join this group to follow along with all your incredible conservation work, and to contribute on the ML side where possible.

👋 Sara Beery, Jason Holmberg (Wild Me), Toryn Schafer, Stefan Schneider, Andy Viet Huynh, Omiros Pantazis, Suzanne Stathatos, Emilio Luz-Ricca, Jaagat P., Declan, Jaanak, Lily Xu, nyakundi lamech, Dhruv Sheth
🐘 Blair Costelloe
Lukáš Adam (lukas.adam.cr@gmail.com)
2022-07-27 11:10:55

*Thread Reply:* I’m exited to join this group 😄 Anyway, welcome 🙂

Andrew Hartnett (a.t.hartnett@gmail.com)
2022-07-27 11:14:06

*Thread Reply:* PS … my image was the result of asking DALLE-2 to produce “a.i. for wildlife conservation digital art”

Andrew Hartnett (a.t.hartnett@gmail.com)
2022-07-27 11:14:56

*Thread Reply:*

🤖 Stefan Schneider, Andy Viet Huynh, Kakani Katija, Carly Batist, Dhruv Sheth
🐘 Stefan Schneider, Andy Viet Huynh, Belen Saavedra, nyakundi lamech, Dhruv Sheth
👏 Declan, Dhruv Sheth
Ben Weinstein (benweinstein2010@gmail.com)
2022-07-27 15:57:53

Anyone on the statistical side of machine learning want to give some thoughts here? I've been thinking and talking (@Sara Beery, @Brad Pickens, and others) about how the outputs of deep learning models will be integrated with existing policy decision-making tools. If we count a million trees in an area and classify each to species, what are the types of uncertainty to attach to point count estimates? When are these intervals useful? My background in Bayesian ecology work (w/@Heather Lynch) doesn't fully jibe with the outputs of these models. Maybe @Bistra Dilkina? For those who want to read more, I enjoyed (https://arxiv.org/pdf/2104.12953.pdf, https://proceedings.neurips.cc/paper/2019/file/8558cb408c1d76621371888657d2eb1d-Paper.pdf). Here is my summary.

Developing an accurate model for species prediction is the first step in operationalizing broad-scale surveys. To illustrate how to convert the classification model into ecological information, we choose a simple task: counting the number of individuals of each species within a broad area. By assigning each prediction the species with the highest confidence score, we can count the frequency of each class. However, this point estimate does not incorporate any uncertainty in the predicted counts. It gives no indication of the relative confidence among classes and no way of assessing the reliability of predictions for future use, which makes it difficult to integrate into existing decision-making frameworks at the policy level. While creating prediction intervals for deep learning classification scores remains an open area of research (Ovadia et al. 2019, Lai et al. 2021, Abdar et al. 2021), there are three main avenues of uncertainty that can be assessed:
1) 'epistemic' uncertainty, the process noise from model training and optimization, which can be assessed by training the model with different starting initializations and using the range of predictions as a confidence interval;
2) 'aleatoric' uncertainty, the process noise inherent in the data, which can be assessed through cross-validation of the training/test data;
3) the per-sample uncertainty measured by the calibrated confidence score given by the model.
While a truly Bayesian perspective on deep learning is either computationally difficult or requires significant alteration to existing models (Kendall and Gal 2017), the confidence scores output by most deep learning systems can be calibrated to better reflect the probability of correct classification (Guo et al. 2017). After calibration, we can simulate from these probabilities and multiply each misclassification by the corresponding row of the evaluation confusion matrix to create coarse confidence intervals for counts at broad scales. We compare these measures of uncertainty and discuss their relevance for downstream applications and management frameworks.
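The calibrated-count simulation described above can be sketched with NumPy. This is an illustrative toy, not code from the thread: the 100-image, 3-species data is random, and only the per-sample (calibrated score) avenue is covered.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_intervals(probs, n_sims=1000, alpha=0.05, rng=rng):
    """probs: (N, C) calibrated per-image class probabilities.
    Repeatedly sample a label for every image from its calibrated
    distribution, tally per-class counts, and report percentile intervals."""
    n, c = probs.shape
    counts = np.empty((n_sims, c), dtype=int)
    for s in range(n_sims):
        # draw one label per image from its calibrated distribution
        labels = [rng.choice(c, p=p) for p in probs]
        counts[s] = np.bincount(labels, minlength=c)
    lo = np.percentile(counts, 100 * alpha / 2, axis=0)
    hi = np.percentile(counts, 100 * (1 - alpha / 2), axis=0)
    return lo, hi

# toy survey: 100 images, 3 species, random "calibrated" probabilities
probs = rng.dirichlet(np.ones(3), size=100)
lo, hi = count_intervals(probs, n_sims=500)
```

The confusion-matrix correction mentioned in the message would add one step: remap each sampled label through the corresponding row of the evaluation confusion matrix before tallying.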

👍 Casey Youngflesh, nyakundi lamech
Heather Lynch (heather.lynch@stonybrook.edu)
2022-07-27 16:17:36

*Thread Reply:* @Ben Weinstein I don't have a good answer for you but your own application is identical to one of ours, which is identifying seal species classified from a computer vision model to get a good estimate of species-specific abundance. There are two layers of uncertainty: 1) the detection of a seal (i.e. is there anything here at all?), and 2) its classification to species (if yes, what is it?). We've had a working group here at IACS of people interested in this exact question at the interface of ML and Bayesian inference because we are now combining ML classification with population models. Unfortunately, the two disciplines are quite far apart and a common understanding of error propagation methods is difficult to come by.

👍 Ben Weinstein, Sara Beery, Casey Youngflesh, Olivier Gimenez, Devis Tuia
Beckett Sterner (bsterne1@asu.edu)
2022-07-28 12:59:50

*Thread Reply:* Don't forget that species classifications are often disputed and have substantial consequences for conservation planning/decisions! Value choices and uncertainty enter into the classification model from multiple inputs sources

👍 Sara Beery
Luke McEachron (lucas.mceachron@myfwc.com)
2022-08-02 09:51:03

*Thread Reply:* @Ben Weinstein When thinking about “When are these intervals useful?”, could you approach this as a model selection problem? Specifically, could you allow an end-user to make various model assumptions to predict occurrence, then demonstrate performance relative to alternative models within a tool framework? Happy to discuss more.

Paul Allin (allinpaul@gmail.com)
2022-08-26 07:40:49

*Thread Reply:* For my research I am looking at something similar, large terrestrial mammals in savanna biome. The first portion will be focused on establishing accuracy and precision from known populations. Once this has been determined it should be possible to scale up (assuming similar characteristics) and gather reliable data on population sizes and distributions. For vegetation I assume you're using satellite imagery and EVI/NDVI for some kind of semi-supervised classification?

Ben Weinstein (benweinstein2010@gmail.com)
2022-09-07 00:30:39

*Thread Reply:* I wanted to add this paper here because it was thoughtfully done.

👍 Sara Beery
Jitendra Hushare (jhushare@doc.govt.nz)
2022-08-01 20:27:56

Hello, I am working as an IT (Enterprise) architect with the Department of Conservation (DOC), New Zealand government. In the past, DOC staff captured thousands/millions of images that are being processed manually or semi-manually in isolated pockets/teams. We are eagerly looking for a working, implemented ML/AI-based enterprise solution for our staff to identify the animals, especially cats, mice, stoats, and possums (from the images), to meet our target of predator-free NZ by 2050. We are open to collaborating, adopting, developing and investing in such a proven suitable solution. Will greatly appreciate your thoughts, ideas and leads to help us solve this enormous problem. Feel free to connect or email me at jhushare@doc.govt.nz. Thank you so much.

👍 Kakani Katija, Henrik Cox (Sentinel), Sara Beery, Rio Akbar, Andy Viet Huynh
Muskan Sachdeva (muskan@meta-lynx.com)
2022-08-02 08:30:12

*Thread Reply:* Hi Jitendra, I should be able to help you with that; I am working on a data curation platform and am sending you an email with the information

😎 Jason Holmberg (Wild Me), Sara Beery, Rio Akbar
👍 Jitendra Hushare
Henrik Cox (Sentinel) (henrik@conservationxlabs.org)
2022-08-02 13:47:52

*Thread Reply:* Hi Jitendra, this sounds fantastic! Also just sent an email to see if there's a good fit with the work we're doing

👍 Jitendra Hushare
Nicholas Osner (nicholasosner@gmail.com)
2022-08-03 03:59:10

*Thread Reply:* Hi Jitendra, I believe we at WildEye could help you out with TrapTagger - our free-to-use and open source platform. I have sent you an email with more details.

Ben Weinstein (benweinstein2010@gmail.com)
2022-08-08 12:00:54

Are there any students looking for a research project? I had a field researcher from French Guiana write to me about ant identification. Lots of interesting questions on taxonomic classification and multiple backgrounds. In general I get a lot of these requests and don't know how they connect to the wider community. I can imagine starting from iNaturalist and then bringing in local domain knowledge? I think it really speaks to our need to go beyond individual projects for each researcher.

👍 Sara Beery, Kakani Katija, Rowan Converse, Andy Viet Huynh
Ritwik (rittyun@yahoo.com)
2022-08-10 11:02:01

Hi, general question. When training object recognition models, how do you handle the background class, i.e. when there are no objects of interest in the image? E.g. in PyTorch, the label 0 is reserved for the background class, but it's not clear what to do with the bounding boxes. They can't be zero-area, as that makes the fractions invalid, and any other box (even one as large as the whole image) can be a misleading label. It may work if the background is more or less constant, as with fixed camera traps, but in general it's not clear to me how to deal with the background.

Chinmay Talegaonkar (ctalegaonkar@ucsd.edu)
2022-08-10 11:12:08

*Thread Reply:* In terms of the loss function, you can try focal loss for single-shot object detectors; for two-stage detectors, you don't need to handle that. In terms of data processing, bounding boxes are usually regressed as anchor box offsets, and anchor box locations are fixed, which is independent of the class label. One thing you can try is to use an indicator variable and not penalize the loss on the bounding box prediction if the pixel is labeled as background.

Benjamin Kellenberger (benjamin.kellenberger@wur.nl)
2022-08-10 11:27:26

*Thread Reply:* Hi! What typical object detectors do is to simply ignore all the bounding boxes that are predicted for background. The way this is done depends on the model type:
• Two-stage detectors (e.g., Faster R-CNN) check the intersection-over-union of each detected bbox with the ground truth; if it falls below a threshold it gets ignored.
• One-stage models (esp. YOLO) predict bboxes in a grid over the image; here, the bbox that is predicted in the same grid cell as the ground truth (and has the best overlap) is used for training; the rest gets ignored.
What is always trained is the class score (and objectness, if available), because these are defined for every bbox (same principle as above).

By the way, 0 is not automatically reserved for background. This depends on the implementation—for example, Detectron2 uses 255 for the background. Also, some models have no background class at all: RetinaNet has sigmoid-activated class outputs and thus does not need to predict an extra background class.

👍 Justin Kay, Sara Beery, Ritwik
Daniel Davila (daniel.davila@kitware.com)
2022-08-10 12:17:21

*Thread Reply:* You can also look at some of the newer anchorless approaches (e.g. FCOS, Centernet2, etc...) which densely classify each pixel, and then apply the bbox loss to just the positive pixels.

Ritwik (rittyun@yahoo.com)
2022-08-10 12:58:48

*Thread Reply:* Thanks a lot everyone, makes much more sense now. I was following the torchvision Faster R-CNN tutorial to implement this, and they mention setting the background class = 0 (as reserved and not to be used for other classes) but don't say anything more about it.

So basically, if the RPN in a two-stage model has no "objectness" boxes passing the criteria, then the class-box prediction network has nothing to predict and would return nulls.

Ben Weinstein (benweinstein2010@gmail.com)
2022-08-10 18:53:04

Has anyone used https://labelstud.io/? I'm starting to look into open source tools (or ones with cheap 'enterprise' add-ons) where we can start creating active learning and model verification environments. We have been using Zooniverse (both public and private) or locally annotating in QGIS for a few years, and I think we are maturing beyond those tools to where we want direct model integration and some control. Or at least a toy model we use to guide annotations, which we then download and develop more novel models offline (more likely). I could build something from scratch (or piggyback on awesome tools like AIDE @Benjamin Kellenberger). Just a hypothetical for the moment.

labelstud.io
👏 nyakundi lamech
Sara Beery (sbeery@caltech.edu)
2022-08-10 19:05:10

*Thread Reply:* We've been working on something like this with @Tom Bernardin and @Caleb Robinson for remote sensed building detection

😎 Jon Van Oast, nyakundi lamech
Ben Weinstein (benweinstein2010@gmail.com)
2022-08-10 19:22:29

*Thread Reply:* sounds good. I think my general question is whether we should be building something or trying to rally behind some reasonable open source starting point.

👍 Matt Weldy
Tom Bernardin (tbernard@umass.edu)
2022-08-10 21:20:57

*Thread Reply:* Hi Ben, I could connect you with my team if you want a demo of Caleb's tool. It is open source, as are (or will be) the modifications we made for our use case.

Devis Tuia (devis.tuia@epfl.ch)
2022-08-11 03:49:17

*Thread Reply:* Ben, we can definitely talk about AIDE, to see if it can align with your needs. If you want, we can set up a meeting together with @Benjamin Kellenberger.

Frederic (frederic@apic.ai)
2022-08-11 03:59:36

*Thread Reply:* I am a big fan of CVAT and can help you if you need some support or insights. They have a lot of integrations for automatic labeling functions using different frameworks or completely custom code. Good API, too… https://github.com/openvinotoolkit/cvat

👍 Timm Haucke, Howard L Frederick
Ritwik (rittyun@yahoo.com)
2022-08-11 04:50:39

*Thread Reply:* I've just started using Label Studio and really like it. For the specific question, I would tend to lean towards using existing open source tools, simply because, as in the case of Label Studio, it does all the heavy lifting, allowing you to spend time and resources on your main goal rather than re-inventing the wheel. It was even easy to deploy it on the cloud and serve it to annotators remotely.

Matt Weldy (matthewjweldy@gmail.com)
2022-08-11 12:33:58

*Thread Reply:* I keep checking to see if Label Studio has added a spectrogram option to their audio annotation features. There has been a ticket on their GitHub for quite a while.

Amrita Gupta (agupta375@gatech.edu)
2022-08-11 12:50:59

*Thread Reply:* I've been using the free version of Label Studio for both text annotation and image pair annotation, albeit without any model integration to speed up/guide labeling. Overall it's not a bad starting point, but I do think a lot of useful functionality is only available in the enterprise version, which is a little frustrating.

Ben Weinstein (benweinstein2010@gmail.com)
2022-08-11 14:34:10

*Thread Reply:* @Amrita Gupta do you get the sense that the model integration is possible without the enterprise version? It was unclear from the docs. In general I'm leaning towards Label Studio, but I just got word today that a potential next project has been using CVAT, and I'll probably be asked not to re-invent too much. @Frederic, I can imagine three avenues for model integration: 1) deciding which images are shown next, 2) pre-labeling images with existing markers, either from a live model or a batch upload, 3) deciding what portion of the image is to be shown, perhaps integrating with cloud GeoTIFFs. These are going to be large airborne tiles with mostly background. Does that sound about right?

Amrita Gupta (agupta375@gatech.edu)
2022-08-11 14:46:36

*Thread Reply:* Yes model integration is possible with the free community edition (but you have to wrap your model code into a LabelStudio ML backend server class). You can then train/fine-tune models and get preannotations, which is enough to "hack" batch AL functionality. Online AL with continuously training models seems to be only available through the Enterprise version.
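For the pre-annotation side of this workflow, the payload a Label Studio ML backend's predict() returns is plain JSON in Label Studio's documented result format. A stdlib-only sketch of converting pixel-space detector boxes into that shape (the from_name/to_name defaults are assumptions and must match your labeling config):

```python
def to_ls_prediction(boxes, labels, img_w, img_h,
                     from_name="label", to_name="image"):
    """Convert pixel-space (x1, y1, x2, y2) boxes into a Label Studio
    pre-annotation dict; Label Studio expects box coordinates as
    percentages of the image width/height."""
    result = []
    for (x1, y1, x2, y2), lab in zip(boxes, labels):
        result.append({
            "from_name": from_name,
            "to_name": to_name,
            "type": "rectanglelabels",
            "value": {
                "x": 100.0 * x1 / img_w,
                "y": 100.0 * y1 / img_h,
                "width": 100.0 * (x2 - x1) / img_w,
                "height": 100.0 * (y2 - y1) / img_h,
                "rotation": 0,
                "rectanglelabels": [lab],
            },
        })
    return {"model_version": "sketch-v0", "result": result}

# one hypothetical detection on a 200x400 image
pred = to_ls_prediction([(10, 20, 60, 120)], ["trout"], img_w=200, img_h=400)
```

In the community edition this dict would be returned (per task) from a `LabelStudioMLBase.predict()` subclass, or uploaded as pre-annotations via the import API.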

Ben Weinstein (benweinstein2010@gmail.com)
2022-08-11 16:46:27

*Thread Reply:* Thanks @Devis Tuia, @Benjamin Kellenberger had talked about this sometime in the past. I see three avenues for active learning (copied from above): 1) deciding which images are shown next, 2) pre-labeling images with existing markers, either from a live model or a batch upload, 3) deciding what portion of the image is to be shown, perhaps integrating with cloud GeoTIFFs. I wasn't sure whether we should go the truly open source route (AIDE, CVAT) or try to hack into a more commercial application. I haven't even gotten the final funding for this yet, so I'm just in the information-gathering phase.

Frederic (frederic@apic.ai)
2022-08-12 04:24:19

*Thread Reply:* @Ben Weinstein Regarding your questions: CVAT was not designed for this active learning approach, but using its API or DB access you could integrate something that might work. The speed might not be great, though.

1) deciding which images are shown next
While running a loop fetching labels from the API, you could add images to a job based on the finished annotations. Possible issue: the client's browser might have a cached version of the task and need to retrieve the changes.

2) pre-label images with existing markers, either from a live model, or a batch upload
Pretty easy, and you can use different approaches:

  1. use the API to create tasks, upload images and annotations.
  2. If it's a common model and task, you can just use the serverless functions and replace the example models with your own. 2.1 Since https://nuclio.io/ is used for serverless functions, you can build your own nuclio container for labeling.

3) decide what portion of the image is to be shown, perhaps integrating with cloud geotifs.

  1. use the API to upload cropped GeoTIFFs.
  2. You might be able to set a zoom block via the API, but not on an individual image basis 😕 -> GeoTIFF and image tiling are not supported right now, but you can implement them and get help from the community: https://github.com/openvinotoolkit/cvat/issues/531

Overall CVAT might not be perfect, but it allows you to hack it to make it work 🙂

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2022-08-18 18:57:52

*Thread Reply:* Bit late to the party, but for what it's worth...

I've used both CVAT and Label Studio, and we've been using a commercial platform at work. We're also looking into active/unsupervised learning. In the past I've written labelling tools out of frustration with some of the limitations of open source stuff. My gut feeling is that it's sensible to split out functionality to the tools that do it best (e.g. storage, labelling, ML annotation+inference and active learning/sampling). The risk is that a lot of tools are almost there, but they cater to common use-cases that don't have a huge amount of complexity and none of them do everything perfectly.

Label Studio is pretty good. I've used it in a pinch to do some semantic segmentation, though there were some funny GUI issues if you had overlapping polygons. I think for the most part all of these tools are pretty good at tagging and bounding box annotation. The differentiators from a labeller's perspective are around how quickly you can segment objects. CVAT is also good and has a usable, if tersely documented, REST API. Error handling is... cautious? For example, if you upload images into a project and then upload labels, you must have labels for all images (even if empty) or the importer breaks.

In terms of integration and on the "ops" side. I think there is a lot of smoke and mirrors from commercial platforms about exactly what is implemented and how well - especially when they're charging you $100+/seat/month. A lot of companies will try and sell you black-box ML integration which can work very well within their walled gardens, but have issues like:

• Can we adjust anything about the model? Often no.
• What model do you use? Proprietary, but probably Detectron2.
• Does it work with some weird TIFF that we have? Maybe.
• Does the service offer image tiling for large inputs, as is common in remote sensing? Often not.
• Can we take out the model for use elsewhere / can we self-host? No, or pay us a lot of money.
• Can we specify exactly which splits are used during training? Often no.
• The list goes on.
Most of these are non-negotiable unless you have a huge contract and pull on where development goes. At least with open source you can see how the sausage gets made. The downside with larger projects like Label Studio and CVAT is that even if you have the technical capacity to PR things, it can take a while to get integrated.

There are other toolkits like Lightly, which focuses more on the active learning side of things. A lot of their unsupervised learning code is open source. Some functionality may be available for free if you're a non-profit. They're based up the road from me in Zurich and I've met them; they're a nice, competent team, growing fast. There's also FiftyOne, which is supposed to be good for model assessment, but it really struggled when I imported a moderately sized dataset, so I can't say much there.

I think Frederic's last answer is sane. You can get away with a labelling tool that has a relatively complete API for file management (like CVAT) and offload the sampling and annotation stages to your own code. Uploading annotations from a pre-trained model before handing images to annotators is what we do at the moment.

(3) is the difficult one to do well IMO. There are some companies selling GIS-oriented labelling solutions (e.g. Picterra), but they're not cheap. I agree the simplest approach is to do that work offline and just crop/upload tiles. Though in principle there isn't any technical reason why you couldn't use something like rasterio to specify a window/windows into the image? At least some of this is offloaded in CVAT by Datumaro, so you could look at adding an importer there that could take e.g. a single source image and a list of windows, and then have Datumaro handle the tiling and forwarding to CVAT.
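(For illustration, a pure-Python sketch of that windowing idea. The function name and fixed-tile strategy are my own invention; in practice each tuple would map onto a rasterio.windows.Window.)

```python
# Sketch (names and strategy are assumptions, not an existing tool):
# compute (col_off, row_off, width, height) windows that tile a large
# raster, with optional overlap. Each tuple maps directly onto
# rasterio.windows.Window(col_off, row_off, width, height).

def tile_windows(raster_w, raster_h, tile=512, overlap=0):
    """Return window tuples covering a raster_w x raster_h raster."""
    step = tile - overlap
    windows = []
    for row_off in range(0, raster_h, step):
        for col_off in range(0, raster_w, step):
            windows.append((col_off, row_off,
                            min(tile, raster_w - col_off),   # clip right edge
                            min(tile, raster_h - row_off)))  # clip bottom edge
    return windows

windows = tile_windows(1024, 768, tile=512)
# e.g. with rasterio: src.read(window=Window(*windows[0]))
```

An importer like the one suggested above could consume exactly such a window list alongside the source image path.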

👍 Ritwik
💯 Howard L Frederick
Ben Weinstein (benweinstein2010@gmail.com)
2022-08-19 17:08:01

*Thread Reply:* I think this is a really excellent summary. We want control of our own models for the active learning side. I will give Label Studio a try, because it feels just insane to have to develop our own tools for such a common process. If it looks like we just can't get the control we need, it's either CVAT or AIDE. I asked a similar question last week to @gvanhorn, who was giving a talk on Merlin, and it is clear that Cornell have spent quite a bit of time developing their own tools from an existing open source repo.

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2022-08-20 09:05:06

*Thread Reply:* At some point I think it'd be worth the remote sensing community pulling together on this, because it's clear there's common functionality that lots of groups need that isn't really provided by the standard open source tools. Maybe we (as a field) need to sit down and figure out what the requirements are and make some recommendations, and then decide on how it gets implemented - or at least document some best practices. There's a huge group of people that would benefit - basically the entire EO sector for a start. My experience talking to labelling companies is that remotely sensed imagery is a bit of an unknown to them - unfamiliar formats, processing paradigms, etc. Though many have said that they have customers who are requesting more geo-oriented features and I think we'll start to see more support in the future. Same goes for bioacoustics I guess, my limited understanding is there aren't great open source audio annotation tools at the moment (unless you want to just tag/cluster spectrograms).

(edit: addendum to this is that virtually all the GIS tools and geo libraries that would need to do the complicated things - stuff like GDAL - are open source, like how every AV/media company in the world depends on ffmpeg... So in that sense I think a lot of the hard work has already been done and this is mostly a data serving problem?

One major pro for commercial tools is that the interfaces are often really well polished, because they have a team of frontend devs working to make things look good, and the labelling "experience," for lack of a better word, is pretty good on some of these platforms. My worry about freemium tools like Label Studio is that you end up with features locked behind subscriptions and there's little incentive to make those public.)

Valentin Lucet (valentin.lucet@gmail.com)
2022-09-24 17:02:09

*Thread Reply:* I wanted to restart this thread to ask whether anyone has any advice on how to serve MegaDetector's results through something like Label Studio or FiftyOne to multiple annotators, for manual correction and further annotation of the results. It seems that one obstacle is converting from the COCO Camera Traps format to the classic COCO format?
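(For anyone attempting that conversion, a hedged sketch of turning MegaDetector-style batch output, with relative boxes, into classic COCO with absolute pixel boxes. Field names reflect my reading of the MD output format and may need adjusting; image sizes have to come from elsewhere, since the detector output doesn't include them.)

```python
# Hedged sketch: convert MegaDetector-style batch output (relative
# [x, y, w, h] boxes) into classic COCO (absolute pixel boxes). Field
# names reflect my reading of the MD format and may need adjusting.

def md_to_coco(md_output, image_sizes, conf_threshold=0.2):
    images, annotations = [], []
    ann_id = 0
    for img_id, entry in enumerate(md_output["images"]):
        w, h = image_sizes[entry["file"]]  # MD output omits image sizes
        images.append({"id": img_id, "file_name": entry["file"],
                       "width": w, "height": h})
        for det in entry.get("detections", []):
            if det["conf"] < conf_threshold:
                continue  # drop low-confidence detections
            rx, ry, rw, rh = det["bbox"]  # relative coords
            annotations.append({"id": ann_id, "image_id": img_id,
                                "category_id": int(det["category"]),
                                "bbox": [rx * w, ry * h, rw * w, rh * h],
                                "score": det["conf"], "iscrowd": 0})
            ann_id += 1
    categories = [{"id": int(k), "name": v}
                  for k, v in md_output["detection_categories"].items()]
    return {"images": images, "annotations": annotations,
            "categories": categories}

md = {"images": [{"file": "a.jpg",
                  "detections": [{"category": "1", "conf": 0.9,
                                  "bbox": [0.1, 0.2, 0.5, 0.5]}]}],
      "detection_categories": {"1": "animal"}}
coco = md_to_coco(md, {"a.jpg": (100, 200)})
```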

Abhay (abhaykash12@gmail.com)
2022-08-11 17:49:05

Hi folks! Dan Morris just added me and it's great to be here! I wanted to quickly introduce myself - I've been working with Felidae Conservation Fund, a nonprofit in the SF Bay Area, for about a year. Last fall, I migrated their data pipeline to the cloud and built a human-in-the-loop annotation platform using MegaDetector & Annotorious (over Django, Dropbox & GCP). If you're curious about the approach, I've written a post about it here.

I'm looking for feedback & opportunities for collaboration! Please let me know if there is any piece that I can abstract out and open-source! We do have plans to open source some of the data after it gets cleaned up as well!

(Some broader context - My background is mainly in computing and AI/ML/DL but applied to NLP & RecSys and I also do full-stack web development. While I used to do more hands-on ML research, I've zoomed out to ML system design and in this domain, my focus is mostly on building usable plug-and-play open-source software layers on top of existing models)

abhaykashyap.com
🎉 Dan Morris, Stephanie O'Donnell, Andy Viet Huynh, Talia Speaker, Jason Holmberg (Wild Me), Ed Miller, Carly Batist, Alexander Robillard, Timm Haucke, Alan Ma, Rita Pucci, Sara Beery, Jeff Reed
❤️ Suzanne Stathatos, Sinan Robillard, Zara McDonald, Lily Xu, Sara Beery
👍 Alexander Robillard, Olivier Gimenez
Dan Morris (agentmorris@gmail.com)
2022-08-11 18:15:17

*Thread Reply:* Welcome!

To your point about public data release, when FCF has data that is ready to share publicly (minus location information, and minus images of people), we would be happy to help get that up on lila.science (I'll even volunteer to do whatever data munging is required to get it into the same format we use for other camera trap data sets on LILA).

Abhay (abhaykash12@gmail.com)
2022-08-11 18:24:56

*Thread Reply:* Thanks Dan! 🙂 We'll keep you in the loop through that journey! Luckily, WildePod was set up with continuous model training in mind so it should be easy to export it out into COCO or other standardized formats for LILA!

Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-08-11 19:16:52

*Thread Reply:* WildePod is very cool!

🙌 Abhay
Ed Miller (ed@hypraptive.com)
2022-08-11 19:36:04

*Thread Reply:* Great post, @Abhay! I am on a similar journey myself with the BearID Project. While BearID is primarily focused on camera trap data, I am currently working on a full-stack web app to annotate and identify wild bears on the Explore.org web cams in Katmai National Park. Even though it would be better to adapt something, I have succumbed to the urge to build something from scratch, mainly to build my web skills and take advantage of being an AWS Community Builder. I have been documenting my journey on this blog. I was considering the , but I'll have a look at Annotorious as well.

explore.org
DEV Community
robots.ox.ac.uk
Ed Miller (ed@hypraptive.com)
2022-08-11 19:37:55

*Thread Reply:* Based on your screenshots, I need to up my UX game! 😄

Ben Weinstein (benweinstein2010@gmail.com)
2022-08-11 19:46:20

*Thread Reply:* @Ed Miller and @Abhay have a look at the thread I started yesterday on annotation (https://aiforconservation.slack.com/archives/CLWGQ4BJ6/p1660171984124629). It feels like we are in need of some community development here. I sense that as a community we are moving from "does computer vision work for ecology," which was the question a few years ago (and motivated this work https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2656.12780), to "how do we develop iterative pipelines that tangibly change long-term monitoring programs." I was giving a talk at @Sara Beery's summer school on this kind of shift in the community this week. I'm going to read both of your materials and try to write some summaries for the entire group next week. There are a lot of competing goals and platforms here. @Jon Van Oast @Jason Holmberg (Wild Me) and Wildbook have something to contribute here as well. Might be the inspiration for a small paper. I'm writing a grant on active learning this week and welcome input on whether/how we should make something that aims for a more universal approach (is that even a maintainable/worthwhile goal), or whether ecological workflows are just too varied and sharing knowledge and research aims in active learning is a better strategy than tool building. I'm 100% open to thoughts; @Benjamin Kellenberger and I had a brief conversation about this earlier today.

🎉 Jon Van Oast, Sara Beery, Abhay
Abhay (abhaykash12@gmail.com)
2022-08-11 22:22:07

*Thread Reply:* Thank you! Also, if y'all want access to the system to get a feel for it, please DM me your email ids and I can add you in!

@Ed Miller - That is pretty neat! Happy to share my Annotorious code if you want. It is extremely simple and I'd highly recommend it, especially when the number of options isn't large. You can even customize the UI a fair bit. In the gif that you see, it's simply using Bootstrap5 classes. And thanks - my UI is Bootstrap5, so the entire scss is maybe 100 lines, mostly for colors and fonts 🙂

@Ben Weinstein - That was a pretty interesting discussion, because they were essentially the same questions I went through when I was building this! I've gone into it in depth in the post, but I'll add a TL;DR with some more thoughts that are very much in line with the discussion in the thread. The broad context here is that most of the design & dev happened last fall. I'm very strongly against building anything from scratch, so at the time Wildlife Insights was an option I seriously considered (the other one was Zamba), but I decided against it for two main reasons.

  1. The big one was that the nonprofit needed a more generic org-management system for inventory and the camera traps out in the field - their status of use, when they needed to be checked, potentially emailing volunteers to go check them, etc. For this, I'd have to build a web app anyway, and Django came with all this out of the box, so it was not really any extra work.
  2. Next was access to models, which is something you've touched on in the thread. I wanted an online-learning, human-in-the-loop system. Since MegaDetector was already mature, we wanted to use it and not even tune it, since it was very likely they'd put out upgrades (as they did a month ago). Our goal was species detection, where it seemed like the models were still not very mature. So the intent was to create a basic model and then, as annotators did their job, have the model update at a regular cadence. Since other volunteers fell through and I got busy, we didn't get to it, but we do intend to this fall if there isn't already an existing solution. I'm not sure when Wild Me came about, but I did learn about it early this year, when WildePod was already up. The core goal for me was data standardization, ideally with object-level annotations from their existing images & CameraBase annotations. After that, the hope was (and is) to keep the useful pieces of WildePod and migrate elsewhere!

I strongly agree with the opinions in that thread about tool fragmentation. When I was first reading through Dan's blog post, I was surprised at how many custom solutions existed, but it made sense given how nascent this is. Usually the tech matures before the products, and from my perspective it looks like the models have matured with MegaDetector/the CVPR workshops, but the software layer around them has some ways to go.

When I built WildePod, my goal was a "bring your own storage & compute" approach to keep the expense around the website low. So the app is simply a layer around Dropbox. This way, any open source solution won't have to bear storage costs, and people can link their own Dropbox accounts. Another thing here is compute for inference. The Django app simply hits an endpoint, so technically you can host your own model. The only downside is that you might have to deploy it to the cloud yourself. I don't have an answer to the latter, since it is a trade-off between cost of compute vs. labor cost for the end user in terms of deployment.

Stepping outside the build scenario, since this is also a common CV task, there is tooling aimed at enterprises that has matured (Labelbox, Label Studio, Roboflow, Scale AI, Google's Vertex AI, etc.). I'd skew towards something like that when the dataset & the org are larger and have capital, or if the company sponsors research/nonprofits. I had a conversation with Joseph Nelson, who runs Roboflow (I invited him to this group), who reached out and wanted to see how they can help. They've had success with it in the past as well. In an ideal world, if there is a small set of paid & free options that folks rally around, it'll greatly help standardize the data underneath. I think the key is data standardization so researchers can seamlessly jump vendors.

I'd love to chat and learn more about this. I've mostly done this in isolation while consuming content from the web (like wildlabs summer seminar series which was great!) so I probably have a lot of blind spots. I'm curious to know if folks here sync occasionally over zoom? If not, would there be any interest in doing so at some cadence?

Ed Miller (ed@hypraptive.com)
2022-08-13 21:42:30

*Thread Reply:* @Ben Weinstein the thread and discussion are definitely of interest for BearID. I definitely think the time is right for convergence, at least on no-cost/low-cost labelling. For the Bearcam Companion application I am working on, a big part of it is for me to become more experienced with some of the technologies and tools for web development. I am very much building much of it from scratch. Longer term, and for the BearID Project more generally, the more standard tools we can utilize the better.

@Abhay I am interested in your code. I am especially interested in how you are saving data for different users (labelers). Can different users label the same image? In my case, for identifying the individual bear, I am allowing each user to select a label when editing, then providing a tabulation of "votes" in the main view, e.g. "480 Otis (4/6 = 66%)." I guess I could leave my current method for the top view and only use Annotorious for the editing. I will need to connect this to AWS DynamoDB through the Amplify DataStore API. It looks like I can use Firebase as a reference.

Abhay (abhaykash12@gmail.com)
2022-08-19 01:24:06

*Thread Reply:* Sorry, just saw this!

Yup. Multiple users can label the same image. That piece of logic is fairly primitive at the moment. As of now, the system serves users the images with the fewest votes. The users can then add/update/delete boxes. This actually goes into the system as a "vote" per box.

So when another user sees the same image, they will see a union of boxes created by machines and humans.

We have a hardcoded threshold for now (~2) as the vote difference. When that threshold is breached, boxes will not be shown anymore.

Right now the weights are uniform. As things mature, we'll very likely add some weights to annotators depending on tenure/success rate etc. to soften the hardcoded threshold.
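(A hypothetical pure-Python sketch of that voting rule as I understand it - the names, weights scheme, and margin test are all assumptions, not WildePod's actual code:)

```python
# Hypothetical sketch of the voting rule described above: keeps/adds
# count as upvotes, deletes as downvotes, and a box stops being shown
# once the (optionally weighted) vote margin reaches a threshold in
# either direction.

def box_visible(upvoters, downvoters, threshold=2, weights=None):
    """True while the box is still contested and should be shown."""
    weights = weights or {}
    up = sum(weights.get(u, 1.0) for u in upvoters)
    down = sum(weights.get(d, 1.0) for d in downvoters)
    return abs(up - down) < threshold

contested = box_visible(["ann1"], ["ann2"])        # 1 vs 1 -> still shown
settled = not box_visible(["ann1", "ann2"], [])    # 2 vs 0 -> hidden
```

Per-annotator weights (e.g. by tenure or success rate) would soften the hardcoded threshold, as described above.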

Abhay (abhaykash12@gmail.com)
2022-08-19 01:26:06

*Thread Reply:* For the data part, since everything is in Django, it is simply writing it to and from the Django models. Underneath this, the database is actually Postgres hosted on CloudSQL. All this is fully abstracted through Django's ORM.

Depending on the infrastructure you're using you might find language specific ORMs to make write-read easier.

The biggest overhead here is usually CRUD on these tables. If the service makes it easy, then great. If you are maintaining a single row of data along the lines of <image> <user> <annotation>, with mostly appends & no related models, I'd recommend skipping relational DBs altogether and writing directly to a beefy spreadsheet like Airtable (it has a simple Python API)

(In my case, I had to manage user accounts, inventory, camera traps etc etc which required a basic relational DB)

Ed Miller (ed@hypraptive.com)
2022-08-21 21:16:16

*Thread Reply:* @Abhay I integrated Annotorious to create/edit/delete the bounding boxes on my images. My website is built on AWS Amplify and React. It took me less than an hour to integrate with my UI and data store. I was able to add a label vocabulary, but the user has to start typing. How did you set yours up to only have the selection options you want?

For now, editing objects is an "admin" feature. I'm not dealing with multiple users yet. I'll need to expand my Objects tables to enable per users labeling. I already support that for the bear identification though.

Abhay (abhaykash12@gmail.com)
2022-08-22 12:26:13

*Thread Reply:* That looks great! Since MegaDetector only has three classes, I render them as buttons. If it's a long list, you'll likely be left with the option list. You could even try a mix of the two by rendering the top-K options as buttons, with the option to choose more from the list. You can do all this by customizing the widget [instructions here]. The example has the template to add any arbitrary HTML element you want (and also style it the way you want). (I also DMed you my widget so you can see what the modification might entail)

recogito.github.io
Valentin Lucet (valentin.lucet@gmail.com)
2022-09-25 09:01:09

*Thread Reply:* Hi @Abhay! I was wondering whether the source code for this project is available somewhere? Your app demonstrates how to serve MD results to multiple users and I haven't seen any other examples like that.

👍 Sara Beery
Abhay (abhaykash12@gmail.com)
2022-09-26 17:18:56

*Thread Reply:* Hi Valentin. Just saw this. I have two repos up

  1. https://github.com/hayabhay/megadetector-fastapi - This is a fastapi wrapper with a dockerfile that is ready-to-deploy on Google Cloud Run. It can essentially run any MD models (you can choose which one to bundle in docker) and can arbitrarily scale to users with GCR. (this is already running for us). Only downside is that GCR doesn't support GPUs so you'll be doing inference on CPUs (~15ish seconds per image). It also comes with a Streamlit UI to quickly test things
  2. I also have an older version for MD v4 that was deployed as a Google Cloud Function. Again, it's very similar to GCR, but it doesn't require an API wrapper. Feel free to let me know if that works for you and if you have any custom requirements. I'll be more than happy to work with you to get MD up on the cloud (my knowledge is limited to Google Cloud Platform, however)
🙌 Josh Seltzer, Valentin Lucet, Jason Holmberg (Wild Me)
Abhay (abhaykash12@gmail.com)
2022-09-26 17:56:09

*Thread Reply:* Also, in case I misread your question and If you're asking about the Django app that does human-in-the-loop annotations, it is right now in a private repo since it has some coupling with Felidae's branding. If you're interested, I can put out a ready-to-use open source version for it as well or if you're just interested to see the code and take pieces of it, I can add you to the org's Github. Let me know what works best. If you'd prefer to have a quick chat, let me know and I'll be happy to get on a zoom call with you! (also adding this to the wider channel in case there are more use cases)

❤️ Valentin Lucet, Sara Beery, Stephanie O'Donnell
Valentin Lucet (valentin.lucet@gmail.com)
2022-09-26 18:27:19

*Thread Reply:* Hi @Abhay! Thanks for the links, I saw them a while back and already starred them 🙂 I am indeed asking about the Django app side of things. I understand that a certain amount of IP is attached to your app and that you may not be able to share it all. One of the options we are considering for our project is to run MD once on a research cluster (on GPUs) for speed, and then serve the results (the images with the bounding boxes) for review by multiple users on our research team. That would require building an app similar to yours that would just serve the images with the boxes, offer the user the ability to vote for a species ID (I like your voting system) and possibly other tags (sex, age class), and also edit/add bounding boxes if any are missing (hence my interest in your integration with Annotorious). I don't know if that is clear, but basically my webdev skills are limited and I'm exploring the feasibility of that idea on my end. Seeing the ORM model structure of the Django app would be very helpful for me, for example.

Valentin Lucet (valentin.lucet@gmail.com)
2022-09-26 18:28:22

*Thread Reply:* So your offer of sharing a ready-to-use version or adding me to the repo, as well as chatting: all or any of that would be of great help to me 🙂

Abhay (abhaykash12@gmail.com)
2022-09-26 18:34:41

*Thread Reply:* Oops! Sorry for the initial misread! And yes, that sounds great. We can do any or all of it depending on how you want to proceed. The larger codebase is a bit gnarly since it does have a fair number of ad-hoc product related patches. Regardless, I can walk you through the code as needed and/or extract specific bits of it to fit any existing apps you have.

To start, I can add you to the repo so you get a sense for it. Then, we can quickly jump on a call so I can get a better sense of what you're planning to do and from there we can figure out how best to move forward. Also, we have a new volunteer who is working on the Django side of things as well and there might even be room to build reusable components along the way.

Valentin Lucet (valentin.lucet@gmail.com)
2022-09-26 18:36:39

*Thread Reply:* Awesome, I will send you a DM!

Abhay (abhaykash12@gmail.com)
2022-09-26 18:36:59

*Thread Reply:* 🚀

🎉 Sara Beery
Swayam Thakkar (swayamt1302@gmail.com)
2022-08-22 23:55:15

Hello Everyone, I am an ML enthusiast currently pursuing a bachelor's in computer science at MIT World Peace University, Pune, India, and also working at the Wildlife Institute of India on camera trap data.

I am glad to be a part of this community !! Thank you @Sara Beery for adding me in !!

😎 Jason Holmberg (Wild Me), Dhruv Sheth, Jon Van Oast, Sara Beery
👋 Ed Miller, Dhruv Sheth, Abhay, Omiros Pantazis, Sara Beery, Lily Xu, Ritwik, Toryn Schafer, Dan Morris, nyakundi lamech, Alexander Robillard
👋:skin_tone_5: Ando Shah
Tjomme Dooper (tjomme@fruitpunch.ai)
2022-08-25 03:44:31

Hi everyone, I have been working on the AI for Wildlife Lab for a while now, but recently @Victor Anton was so kind as to point me here 🙏:skintone3: Excited to hear from all of the projects that are going on and to get to know the people working on them! Any recommended channels?

Maybe the FruitPunch AI community can help out here and there by crowdsourcing some ML work. If you need extra hands for data wrangling, engineering, or analysis, if you'd like to explore many more ML models than you have time for, hit me up 🐘🍉

🙌 Stephanie O'Donnell, Jason Holmberg (Wild Me), Carly Batist, Alexander Robillard, Abhay, Sara Beery, Ed Miller, Fadel, Sinan Robillard, Victor Anton
Paul Allin (allinpaul@gmail.com)
2022-08-26 07:24:46

Hi,

Paul Allin (allinpaul@gmail.com)
2022-08-26 07:26:24

I'm Paul from South Africa, working on my PhD on automating aerial animal censuses by applying ML to remotely sensed imagery. Keen to get in touch with others, and thank you @Devis Tuia for adding me!

🙌 Stephanie O'Donnell, Dan Morris, Caleb Robinson
👋 Omiros Pantazis, Carly Batist, Declan, Fadel, Abhay, Caleb Robinson, nyakundi lamech, Benjamin Kellenberger, Alexander Robillard
Devis Tuia (devis.tuia@epfl.ch)
2022-08-26 09:10:50

*Thread Reply:* welcome!

Ben Weinstein (benweinstein2010@gmail.com)
2022-08-26 09:56:49

*Thread Reply:* hi paul! what taxa are you working on?

Caleb Robinson (calebrob6@gmail.com)
2022-08-26 17:08:32

*Thread Reply:* Hi Paul, welcome to the community!

Paul Allin (allinpaul@gmail.com)
2022-08-27 05:35:22

*Thread Reply:* Thanks! I'm looking at large (>50kg) mammals

Ben Weinstein (benweinstein2010@gmail.com)
2022-08-27 10:00:07

*Thread Reply:* Out of curiosity, what happens when you apply our 'bird' detector to such images. https://deepforest.readthedocs.io/en/latest/bird_detector.html We'd like to formalize this towards an 'animal' detector in the next year or so.

Fadel (fadel.seydou@gmail.com)
2022-10-19 08:59:05

*Thread Reply:* Hello @Paul Allin, I'm Fadel from Switzerland (EPFL), currently working on my Master's thesis on "automated aerial census of large herbivores". I would love to connect and discuss this topic with you and learn more about your work.

Paul Allin (allinpaul@gmail.com)
2022-10-22 03:17:55

*Thread Reply:* Hi Fadel, great to hear about your masters. Maybe easiest to set up a call. Can you WhatsApp me? +27712287116

💯 Fadel
Devis Tuia (devis.tuia@epfl.ch)
2022-10-23 14:01:11

*Thread Reply:* Hello @Fadel good to hear from an EPFL colleague 😄. In which lab are you?

Luke Sheneman (sheneman@uidaho.edu)
2022-08-26 11:41:40

Hi all - my name is Luke Sheneman and I work at the University of Idaho as Director of Research Computing. I am a co-investigator on an NSF DISES project using a camera trap grid deployed in Eastern Oregon that will be used to forecast interactions between predators and ungulates/livestock, specifically in drought conditions. I am handling the AI side of things, including developing species classifiers and developing/deploying satellite-enabled edge AI. @Dan Morris pointed me here. Super excited to be a part of this community!

🎉 Dan Morris, Abhay, Toryn Schafer, Ethan Shafron, Alexander Robillard, Jason Holmberg (Wild Me)
👋 Carly Batist, Mark Goldwater, Omiros Pantazis, Benjamin Kellenberger, Carl Boettiger, Ed Miller
😎 Eddie Zhang, Sara Beery, Jason Holmberg (Wild Me)
Nicholas Osner (nicholasosner@gmail.com)
2022-08-30 10:26:36

*Thread Reply:* Hi Luke. I work on an open-source camera-trap-processing web application called TrapTagger. At present, we only offer an in-house southern-African-species classifier, with MegaDetector as an empty-image remover for our users outside of Africa. However, we are looking to add more species classifiers and are in the process of collaborating with a few organisations to host their classifiers, for both their own use and the use of others. Perhaps we can assist you by providing a GUI and annotation workflow to wrap around the species classifiers you are working on, or you could even just provide your classifiers for others to use free of charge if you like. Send me an email at nic@innoventix.co.za if you would like to set up a discussion around this.

Michael Bunsen (notbot@gmail.com)
2022-09-14 17:29:55

*Thread Reply:* Hi Luke, that sounds like an awesome project. I am based in Oregon but working remotely with several automated insect monitoring projects based in Montreal and the UK. I am handling the software and infrastructure side of things, but also hoping to meet some folks and setup some edge devices with partners in the Pacific Northwest.

Michael Bunsen (notbot@gmail.com)
2022-09-14 17:30:51

*Thread Reply:* I'll take a look at TrapTagger as well @Nicholas Osner! I am working on a desktop application with some similar functionality, but focused on moth and insect classification.

Nicholas Osner (nicholasosner@gmail.com)
2022-10-03 03:27:46

*Thread Reply:* Hi Michael, I'm sorry - I managed to miss your message until now. Let me know if you would like to have a chat about TrapTagger. Maybe you can make use of some of our open source code. Worst case scenario, we have a bit of a chat about our respective projects. Let me know.

Abhay (abhaykash12@gmail.com)
2022-08-30 12:51:59

Hi folks! I was migrating to MegaDetector v5 over the past couple of days and the model looks great! Faster computation would mean lower cloud costs 🙂 As a thank you, I packaged MegaDetector with FastAPI and also added a Streamlit UI to quickly visualize & compare MegaDetector models. Please let me know if folks find this useful and also if you have any comments, feedback, suggestions! Github repo -> https://github.com/hayabhay/megadetector-fastapi In the upcoming weeks, I'll create some ready to use containers that can directly be deployed to Google Cloud Run for our purposes and share it in the repo as well. The migration itself was pretty smooth except for one gotcha - details in the thread. (cc: @Dan Morris - please let me know if it makes sense for parts of this to move to the MegaDetector repo)

👍 Mitch Fennell, Sara Beery, Yves Bas
Abhay (abhaykash12@gmail.com)
2022-08-30 12:57:44

*Thread Reply:* The biggest gotcha while migrating was using YOLOV5.

Since imports within the repo aren't relative, the repo root must be on PATH/PYTHONPATH.

As a result, yolov5's packages aren't namespaced, and having a top-level module like utils will lead to non-obvious errors. Any referencing code must have its own non-conflicting namespace at the root level.
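(A runnable demonstration of that shadowing problem: two directories each ship a top-level utils module, and whichever sits earlier on sys.path wins. Directory and module names here are invented for the demo.)

```python
# Runnable demonstration of the shadowing described above: two
# directories each contain a top-level utils.py, and whichever sits
# earlier on sys.path wins the import. Names are invented for the demo.
import importlib
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
for name in ("my_api", "yolov5_repo"):
    d = root / name
    d.mkdir()
    (d / "utils.py").write_text(f"WHO = '{name}'\n")

# The "yolov5" checkout has to come first for its non-relative imports
# to resolve - which is exactly what shadows your own utils module:
sys.path.insert(0, str(root / "my_api"))
sys.path.insert(0, str(root / "yolov5_repo"))
importlib.invalidate_caches()

import utils  # resolves to yolov5_repo/utils.py, not my_api/utils.py
```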

Abhay (abhaykash12@gmail.com)
2022-08-30 12:59:39

*Thread Reply:* Also, there was a fair bit of setup in terms of cloning the repo, checking out a particular commit and setting up the env. I've replaced it by automatically downloading the archive zip file and setting the necessary env from within the MegaDetector code itself.

Dan Morris (agentmorris@gmail.com)
2022-08-30 14:10:52

*Thread Reply:* FWIW all of that stuff with checking out a particular commit and a particular version of PyTorch was working around this issue:

https://github.com/ultralytics/yolov5/issues/6948

...an incompatibility between certain versions of YOLOv5 and certain versions of PyTorch. It appears (in bold because I'm not 100% sure yet) that this has been resolved, i.e. that new versions of PyTorch and new versions of YOLOv5 now get along nicely and can run MDv5. YMMV. We'll update instructions if we're sure about that at some point.

👀 Abhay, Mitch Fennell
Abhay (abhaykash12@gmail.com)
2022-08-30 14:13:40

*Thread Reply:* Ah! Interesting! I did forget to mention that the version of PyTorch was also a gotcha and that was more to do with PyTorch's compatibility with yolov5 (even with the specific commit). I had to use these versions for it to work torch==1.9.1 torchvision==0.10.1

Dan Morris (agentmorris@gmail.com)
2022-08-30 14:14:20

*Thread Reply:* Also see this discussion:

https://github.com/ultralytics/yolov5/discussions/9138

I verified this in one (non-GPU) environment, and everything went fine. Need to verify in GPU environments.

👀 Abhay
Abhay (abhaykash12@gmail.com)
2022-08-30 14:25:23

*Thread Reply:* Ah! I think the latest torch version that I used (torch==1.12.1) triggered this error: AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor' (for the specific commit as well). Looks like it is still an open issue.

Also, I see the No module named 'utils' error was mentioned here (likely from the repo not being on the path). For me, this was because I had a utils.py in my own API that prevented it from being imported (since I had yolov5 in sys.path before my project root).

Namrata Deka (dnamrata@cs.ubc.ca)
2022-08-31 04:11:52

Hi everyone! I am Namrata Deka and I'm an MSc student at UBC. I'm working on a project to learn image representations that are invariant to non-causal/spurious distractors (e.g. background, lighting, camera angle, etc.). I would love to apply this to a conservation dataset, and was wondering if anyone here is aware of a dataset with continuous (or high-dimensional) target labels, like bounding boxes or segmentation masks, that also has continuous (or high-dimensional) labels for a distractor that is correlated with the target in the train set and less correlated (or independent) in the test (OOD) set?

🐘 Tjomme Dooper, Sara Beery, Jason Holmberg (Wild Me)
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-08-31 08:37:42

*Thread Reply:* check out LILA BC - they have tons of animal camera trap, audio, etc. datasets that are labeled and ML-ready. https://lila.science/

Sara Beery (sbeery@caltech.edu)
2022-08-31 10:58:29

*Thread Reply:* Explicitly labeled distractors via bounding boxes or segmentation masks isn't something that exists already for any of the LILA datasets as far as I know, but camera trap data is static so labeling a fixed background object in a given camera would be quite lightweight

👀 Namrata Deka, Juan Sebastián Cañas Silva
👍 Namrata Deka, Tiziana Gelmi Candusso
Namrata Deka (dnamrata@cs.ubc.ca)
2022-08-31 11:09:11

*Thread Reply:* Thanks a lot for the suggestion. Good idea to take advantage of static images to add more labels. 🙂

Ed Miller (ed@hypraptive.com)
2022-09-04 15:01:51

I have published the Bearcam Companion web application along with the latest blog in the series. In this post I discuss how to use AWS Amplify Hosting and GitHub to create an auto-deployed web application to a custom URL. The blog post is here. The Bearcam Companion website is here.

🎉 Jon Van Oast, Jason Holmberg (Wild Me), Carly Batist, Eddie Zhang, Stephanie O'Donnell, Marconi Campos, Abhay, Lily Xu, Anton Alvarez
:bearid: Dan Morris, Jason Holmberg (Wild Me), Stephanie O'Donnell, Abhay, Crystal Huang
Chris Yeh (chrisyeh96@gmail.com)
2022-09-05 22:31:59

Several questions for the ML community here:

  1. Are there any "state-of-the-art" techniques for transfer learning on a new set of labels? For example, training an animal species classifier for one set of species, then transferring to a new (but possibly overlapping) set of species. (By "state of the art", I mean any technique that isn't just fine-tuning the last layer of a neural net.)

  2. What is the state-of-the-art approach for handling hierarchical labels in ML models? MegaClassifier provides some post-hoc utilities to remap labels based on the biological taxonomy. But what about methods for leveraging hierarchical labels during training?

  3. What about transfer learning with hierarchical labels? Say your source and target label sets have overlapping hierarchies - is there a way to leverage that info without completely throwing away the last layer of your neural net?

This hierarchical transfer learning problem is increasingly common in the projects I'm working on, but I don't have a good way to solve it. If anyone has worked on this problem before, please let me know what you've tried!
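On (2), the simplest post-hoc option is remapping flat species scores up a taxonomy, which is roughly the flavor of what the MegaClassifier utilities do; the taxonomy and probabilities below are invented for illustration:

```python
# Post-hoc hierarchical remapping: sum leaf (species) probabilities under
# each parent taxon. A minimal sketch; taxonomy and scores are made up.
parent = {                      # species -> genus (one level of a taxonomy)
    "red_fox": "vulpes",
    "arctic_fox": "vulpes",
    "gray_wolf": "canis",
    "coyote": "canis",
}

def rollup(species_probs, parent):
    """Aggregate a flat softmax over species into parent-level scores."""
    out = {}
    for species, p in species_probs.items():
        out[parent[species]] = out.get(parent[species], 0.0) + p
    return out

probs = {"red_fox": 0.5, "arctic_fox": 0.125, "gray_wolf": 0.25, "coyote": 0.125}
print(rollup(probs, parent))    # {'vulpes': 0.625, 'canis': 0.375}
```

Leveraging the hierarchy during training (rather than after) is exactly the harder question (2)/(3) ask about.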

😊 Lily Xu
Chris Yeh (chrisyeh96@gmail.com)
2022-09-05 22:34:20

*Thread Reply:* Tagging some friends, in case y'all have any ideas and/or have read good papers on these topics: @Sara Beery @Lily Xu @Suzanne Stathatos

👀 Suzanne Stathatos
Mark Goldwater (mgoldwater@whoi.edu)
2022-09-05 23:06:21

*Thread Reply:* Out of curiosity re: 1., have you experienced any problems with simply fine-tuning? Are you trying to achieve better results than that approach provides? More efficient training that exploits overlap?

Beckett Sterner (bsterne1@asu.edu)
2022-09-05 23:39:22

*Thread Reply:* @Atriya Sen 👆

Chris Yeh (chrisyeh96@gmail.com)
2022-09-06 01:56:49

*Thread Reply:* @Mark Goldwater: correct, I'm asking about ways to achieve better results than simple fine-tuning

Devis Tuia (devis.tuia@epfl.ch)
2022-09-06 02:11:17

*Thread Reply:* I think Alexander Mathis has some solutions for transfer learning. They are learning global models (they call them “supermodels”) for pose estimation of different species (e.g. a model for canines, one for birds) that can then be fine-tuned. All based on DeepLabCut. For efficient fine-tuning, we generally use our software AIDE, with pretrained models that can be fine-tuned using some active learning scheduling to minimize the tuning cost. Check with @Benjamin Kellenberger if you want more details.

Ben Weinstein (benweinstein2010@gmail.com)
2022-09-06 11:11:07

*Thread Reply:* not exactly the answer, but related: I am working on a fine-grained tree species classification paper that uses a set of nested hierarchical models. We found that pretraining the flat model, then using that as a starting point for each hierarchical model, was useful. Similarly, our 'bird' detection model has proven useful as a starting point for bird species classification.

👍 Sara Beery
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2022-09-06 12:16:56

*Thread Reply:* I second @Benjamin Kellenberger as the right person to ask on this. Generally what I've seen is that fine-tuning works just as well as, if not better than, complex methods for transfer learning - though I could be in the same boat as you.

@Elijah Cole (Deactivated) has worked a lot on understanding the costs/benefits of label granularity, e.g. his paper here. He probably has good insight on (3), the hierarchical labels aspect.

👍 Chris Yeh, Lily Xu
Chris Yeh (chrisyeh96@gmail.com)
2022-09-06 13:35:05

*Thread Reply:* Thank you all for your responses!

@Ben Weinstein: how are you training the "set of nested hierarchical models"? And are those hierarchical labels different from the flat pretraining?

@Devis Tuia: sounds interesting - do you have any paper/publication you can point me to?

@Benjamin Kellenberger: everyone seems to be tagging you - would love to hear your thoughts on this!

Ben Weinstein (benweinstein2010@gmail.com)
2022-09-06 14:43:23

*Thread Reply:* I don't know if this figure will be useful yet, but here is the conceptual figure from the draft. They are the same labels in both architectures, that's why I said it wasn't exactly what you were looking for. So training the flat model for all classes and then using that as the starting point for each of the models on the left.

❤️ Suzanne Stathatos
Chris Yeh (chrisyeh96@gmail.com)
2022-09-06 15:01:30

*Thread Reply:* @Ben Weinstein: Is this understanding correct? You first train a classifier to predict the flat labels. Then you fine-tune multiple instances of the classifier to focus on subsets of the flat labels. Finally, you combine these fine-tuned models in a hierarchical way. (But the individual models themselves are not hierarchical?)

Ben Weinstein (benweinstein2010@gmail.com)
2022-09-06 15:07:05

*Thread Reply:* yes, I think that's reasonable. When you say the individual models are not hierarchical, that interests me; if you mean that they have separate optimizers, then yes. I've actually been really interested to see if anyone can show me a mixture-of-experts type hierarchy that is co-trained with a single combined loss function. Just for implementation, it feels confusing how to feed batches into such a network. We found it much easier to train the pieces separately and create separate dataloaders for each. It's definitely less efficient.
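For readers following along, the two-stage scheme discussed in this thread (coarse routing followed by per-subtree experts) can be caricatured like this; the "classifiers" here are stand-in rules, not trained networks, and the label names are invented:

```python
# A toy version of the two-stage hierarchy: a coarse model routes each
# sample to a group, then a per-group "expert" (fine-tuned from the flat
# model in the real pipeline) assigns the final fine-grained label.

def coarse(features):
    # Stand-in coarse rule: positive first feature -> 'bird', else 'mammal'.
    return "bird" if features[0] > 0 else "mammal"

experts = {
    # Each expert only discriminates within its own subtree of labels.
    "bird": lambda f: "warbler" if f[1] > 0 else "vireo",
    "mammal": lambda f: "fox" if f[1] > 0 else "wolf",
}

def predict(features):
    group = coarse(features)                 # stage 1: coarse routing
    return group, experts[group](features)   # stage 2: fine-grained expert

print(predict((1.0, -2.0)))    # ('bird', 'vireo')
print(predict((-0.5, 3.0)))    # ('mammal', 'fox')
```

The separate-dataloaders pain Ben mentions shows up because each expert only ever sees the samples its router sends to that subtree.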

👍 Valentin Gabeff
Devis Tuia (devis.tuia@epfl.ch)
2022-09-06 15:12:14

*Thread Reply:* @Chris Yeh, for the reference to AIDE: https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13489. For a reference to the supermodels, maybe @Valentin Gabeff can provide one from Alex’s team (Alex is not in the slack channel)?

👍 Valentin Gabeff
Valentin Gabeff (valentin.gabeff@protonmail.ch)
2022-09-06 18:29:35

*Thread Reply:* Alexander pointed me to this pre-print: https://arxiv.org/pdf/2203.07436.pdf

The approach is a bit different as it's applied to animal pose estimation, but the general design shares some ideas with what has been mentioned: pre-train a 'SuperAnimal' model that performs well on different species, and then fine-tune it for a specific dataset (or, in this case, fine-grained species) to improve performance (see details about gradient masking and pseudo-labeling in the pre-print).

And no, Alexander is not here (yet?) ;)

👍 Ben Weinstein
Ben Weinstein (benweinstein2010@gmail.com)
2022-09-06 20:44:02

*Thread Reply:* this is a really nice paper and something we would like to move towards for UAV-based object detection.

👍 Devis Tuia
Atriya Sen (atriya@atriyasen.com)
2022-09-07 12:21:59

*Thread Reply:* You may be interested in our paper here: https://ojs.aaai.org/index.php/AAAI/article/view/17750, where we combine learning with hierarchical labels and automated reasoning over taxonomies.

Lily Xu (lily_xu@g.harvard.edu)
2022-09-19 18:06:50

*Thread Reply:* Hi Chris! Very late to respond here and I think others on this thread would be much more helpful on supervised learning tasks (I've focused much more on planning)

But in the planning literature, there may be some interesting ideas to explore:
• learning diverse skills with RL (very related to transfer learning): this approach uses maximum entropy to train a base model, then uses that pretrained model for specific downstream tasks, plus hierarchical RL to help solve more complex tasks. https://arxiv.org/pdf/1802.06070.pdf
• multi-task RL: uses “soft modularization” to train a policy with shared parameters on multiple different tasks. https://arxiv.org/pdf/2003.13661.pdf
And one idea in the supervised learning world I really like is MIMO, multi-input multi-output. The idea is to use a single NN to act as an ensemble with N inputs and N outputs; then you average the outputs as your final prediction. It's nice because it helps with stability (benefits of an ensemble) and requires minimal extra storage or computation. There may be variations of this NN architecture that you could adapt to transfer learning. https://arxiv.org/pdf/2010.06610.pdf

😍 Suzanne Stathatos
Holger Klinck (hk829@cornell.edu)
2022-09-09 09:29:27

Hi everyone,

We are looking for new team members for our BirdNET team (birdnet.cornell.edu)! These are full-time, 3-year positions. Specifically, we are looking for:

An Ecologist: https://www.tu-chemnitz.de/verwaltung/personal/stellen/257080_4_EPu.php

A Data Scientist: https://www.tu-chemnitz.de/verwaltung/personal/stellen/257080_5_EPu.php

An Embedded Systems Engineer: https://www.tu-chemnitz.de/verwaltung/personal/stellen/257080_6_EPu.php

The job descriptions are provided in German and English (scroll down to the bottom of the page for the English version). These positions will be hired through Chemnitz University in Germany. Remote work is possible; however, successful international applicants will be required to obtain a German work permit.

If you have questions about these positions, please get in touch with Stefan Kahl @ sk2487@cornell.edu.

Cheers,

Holger

👍 Oisin Mac Aodha, Stephanie O'Donnell, Frederic, Andy Viet Huynh, Jaanak, Josh Veitch-Michaelis, Rita Pucci, Sara Beery, Yuanqi Du, Amandine Gasc
👍:skin_tone_2: Swayam Thakkar
💯 Carly Batist
Sara Beery (sbeery@caltech.edu)
2022-09-12 18:44:39

Hi to all the new faces on here!! Feel free to introduce yourselves 😄

👍 Jaanak, Yuanqi Du, Jason Holmberg (Wild Me), Jason Parham, Dhruv Sheth, Francisco Carrillo Pérez
💯 Jon Van Oast, Jason Holmberg (Wild Me), Heather, Francisco Carrillo Pérez
Francisco Carrillo Pérez (carrilloperezfrancisco@gmail.com)
2022-09-13 03:01:52

Hi all! 👋 my name is Francisco Carrillo Perez, and I am a final-year PhD student at the University of Granada, Spain, and a Visiting Researcher at Stanford University, USA! I work on multimodal classification and multimodal generative models applied to bioinformatics problems (mainly cancer data!), but I am really interested in conservation and want to learn more about how AI can help. Looking forward to it!

👋 Devis Tuia, Riccardo de Lutio, Benjamin Kellenberger, Declan, Sara Beery, Suzanne Stathatos, Dan Morris, Lucia Gordon, Andy Viet Huynh, Jason Holmberg (Wild Me), Anton Alvarez, Lauren Gillespie, Rita Pucci, Chris Yeh
Thiên-Anh Nguyen (thien-anh.nguyen@epfl.ch)
2022-09-13 04:02:43

Hi everyone! I'm Thiên-Anh, a PhD student at @Devis Tuia's lab at EPFL (Switzerland). I am working on monitoring and understanding treeline dynamics in the Swiss Alps using remote sensing and deep learning 🌲. Methods-wise, I'm interested in explainable deep learning, semantic segmentation, time series, noisy/multi-sensor data, domain adaptation and many more (basically all the challenges we face when using data from the real world)... I'm not working exactly in conservation, but I'm interested in all the methods and projects discussed here, and hope I can bring something to the discussions! 👋😊

🎉 Robin Zbinden, Benjamin Kellenberger, Andrés C Rodríguez, Sara Beery, Dan Morris, Jason Holmberg (Wild Me)
👋 Diego Marcos, Riccardo de Lutio, Benjamin Kellenberger, Sean Nachtrab, Robin Zbinden, Declan, Sara Beery, Suzanne Stathatos, Andy Viet Huynh, Lucia Gordon, Jason Holmberg (Wild Me), Lauren Gillespie, Rita Pucci
🌳 Burak Ekim
Sean Nachtrab (sean.nachtrab@gmail.com)
2022-09-13 08:17:57

Hello all, I'm a machine learning engineer. I've been following this slack for a while and am very interested in what's being worked on! I work in the rail sector on autonomy at otiv.ai in Belgium but am American. I've been interested in climate, ecology and environment since my BSc at King's but the pandemic encouraged me to be pragmatic with my employment choice upon finishing uni. I have a few friends in ecology/environmental PhDs who do really interesting stuff and want to help!

I've spent the last two years in industry working on sensors (camera, lidar, ...) and computer vision (segmentation, detection, classification, ...). I've grown quite disillusioned with full-time startup work, especially in autonomous vehicles, and I'm considering going back to uni for a PhD or changing my line of work to have more free time and a more substantial positive impact on the world/environment.

I'm looking to offer help in computer vision, remote sensing, dataset creation and do volunteer work as well as discuss PhDs and applied ml/computer vision, feel free to message if you want to chat or have opportunities!

👋 Thiên-Anh Nguyen, Declan, Tjomme Dooper, Omiros Pantazis, Sara Beery, Suzanne Stathatos, Dan Morris, Andy Viet Huynh, Kakani Katija, Lucia Gordon, Jason Holmberg (Wild Me), Rita Pucci
Nasrin Montazeri (n_seemorgh@yahoo.com)
2022-09-13 11:25:12

Hello everyone. I'm Nasrin Montazeri from Iran, and I'm an electrical engineer at a corporation in the HVAC industry in my hometown, looking for a Ph.D. position to pursue my education. Besides, I'm a semi-professional rock climber and mountain addict. Also, I'm studying French at B1 level. À bientôt :)

👋 Sara Beery, Omiros Pantazis, Declan, Suzanne Stathatos, Dan Morris, Andy Viet Huynh, Lucia Gordon, Jason Holmberg (Wild Me), Rita Pucci
Ditiro Rampate (ditirorampate@gmail.com)
2022-09-13 11:42:44

Hello everyone👋:skintone5:, I am Ditiro from Botswana. I am a volunteer ML engineer with OmdenaAI, where we solve global challenges using AI, from climate change to wildlife conservation. I am currently looking for job and graduate (Masters in AI or computer vision) opportunities. So great to be here😉

👋 Sara Beery, Declan, Suzanne Stathatos, Dan Morris, Andy Viet Huynh, Lucia Gordon, Jason Holmberg (Wild Me), Jon Van Oast, Omiros Pantazis, Malte Pedersen, Rita Pucci
Viktor Domazetoski (viktor.domazetoski@hotmail.com)
2022-09-13 13:49:11

Hello everyone! I am Viktor and I am from Macedonia, though currently I am studying in Goettingen, Germany. Although I finished my Bachelors in Computer Science and continued with a Masters in Statistics, I decided that I want to use my skills in AI and data to help save the environment. This led me to start a second Masters in Ecosystem Modelling, of which I just finished my first year. Currently I am looking at PhD opportunities at the interface between these two fields, utilizing areas such as Network Science, Natural Language Processing and Computer Vision. I am very excited to find this community and pleased to meet you all; feel free to reach out to chat about anything :D

👋 Declan, Lucia Gordon, Jason Holmberg (Wild Me), Sara Beery, Toryn Schafer, Andy Viet Huynh, Jon Van Oast, Dan Morris, Devis Tuia, Eddie Zhang, Omiros Pantazis, Suzanne Stathatos, Lauren Gillespie, Rita Pucci
G. LeBuhn (lebuhn@sfsu.edu)
2022-09-13 14:59:00

Hello everyone. I'm a plant and insect biologist in San Francisco (SFSU). I am interested in using sensors to monitor insects (bees and herbivores) and to track ecosystem services - particularly pollination. I am hoping that computer vision will help manage and process the image data.

👍 Sara Beery, Declan, Jon Van Oast, Dan Morris, Omiros Pantazis, Michael Bunsen, Jason Holmberg (Wild Me)
👋 Cameron Trotter, Sara Beery, Suzanne Stathatos, Lauren Gillespie, Rita Pucci
🐝 Ando Shah
Michael Bunsen (notbot@gmail.com)
2022-09-14 17:12:25

*Thread Reply:* Hello G! I am working on an autonomous insect monitoring project in a research lab at Mila (Quebec). We are focused on Lepidoptera at the moment.

Here is a recent paper that describes some of our current methods and partners: https://www.cell.com/trends/ecology-evolution/fulltext/S0169-5347(22)00134-3

And here is a webinar that showcases some of the hardware projects in development: https://youtu.be/2Z8aG7qYAa0

I am currently working remotely from Portland, Oregon and I would love to connect with more folks interested in this topic on the west coast!

Valentin Gabeff (valentin.gabeff@protonmail.ch)
2022-09-14 03:23:45

Hello everyone, I'm Valentin from Switzerland. I have just started a PhD in the same EPFL lab as @Thiên-Anh Nguyen, co-supervised by @Devis Tuia and Alexander Mathis. I will be working on the interaction between wildlife and the environment using CV & DL, with a focus on the Swiss Alps (project description).

Looking forward to sharing thoughts here, and hopefully to meeting when we have the occasion.

👋 Benjamin Kellenberger, Robin Zbinden, Thiên-Anh Nguyen, Omiros Pantazis, Sara Beery, Declan, Dan Morris, Suzanne Stathatos, Jason Holmberg (Wild Me), Lauren Gillespie
🎉 Robin Zbinden, Sara Beery, Rita Pucci
serge sarkis (sergesarkis7@gmail.com)
2022-09-14 04:03:00

Hello everyone, I'm Serge from Lebanon. I have an MS in mechanical engineering with an emphasis on smart materials and soft robotics. I'm currently a research assistant at the American University of Beirut. We are building a reforestation robot and planning to study the biodiversity of local wildlife sanctuaries and national forests using CV and DL. Very excited to join the community!

👋 Omiros Pantazis, Sara Beery, Declan, Dan Morris, Suzanne Stathatos, Lauren Gillespie, Rita Pucci
Graeme Phillipson (graeme.phillipson@bbc.co.uk)
2022-09-14 11:33:08

Hello! I’m Graeme from BBC Research & Development. We’re interested in camera traps and remote cameras for producing natural history television programmes.

👏 Lucia Gordon, Oisin Mac Aodha, Stephanie O'Donnell, Carly Batist, Felipe Parodi, Sara Beery, Dan Morris, Suzanne Stathatos, Jason Holmberg (Wild Me), Cathy Atkinson, Viktor Domazetoski, Lauren Gillespie, Omiros Pantazis, Ștefan Istrate
👋 Cameron Trotter, Valentin Gabeff, Sara Beery, Jason Holmberg (Wild Me), Georgia Atkinson, Adam Noach, Rita Pucci
😎 Jon Van Oast, Sara Beery
🦁 Roni Choudhury
⚡ Roni Choudhury
Graeme Phillipson (graeme.phillipson@bbc.co.uk)
2022-09-14 11:36:01

*Thread Reply:* https://www.bbc.co.uk/rd/blog/2021-04-winterwatch-artificial-intelligence-automated-monitoring

🙌 Stephanie O'Donnell, Sara Beery, Justin Kay, Emily Lines, Ștefan Istrate
Robert Dawes (robert.dawes@bbc.co.uk)
2022-09-20 06:10:08

*Thread Reply:* Hi, I'm Robert, also from BBC Research & Development and in the same team as Graeme.

🙌 Stephanie O'Donnell, Adam Noach, Cathy Atkinson, Andrew Schulz, Oisin Mac Aodha, Omiros Pantazis, Sara Beery, Gedeon, Ștefan Istrate, Viktor Domazetoski
👋 Georgia Atkinson, Sara Beery, Rita Pucci, Cameron Trotter, Declan
Matthew Judge (matthew.judge@bbc.co.uk)
2022-09-21 06:15:24

*Thread Reply:* Hi everyone! Also joining from BBC R&D, and also in Rob's and Graeme's team 🙂

👋 Georgia Atkinson, Oisin Mac Aodha, Stephanie O'Donnell, Cameron Trotter, Rita Pucci, Andrew Schulz, Sean Nachtrab, Omiros Pantazis, Sara Beery, Catherine Villeneuve, Dan Morris, Jon Van Oast, Eddie Zhang, Jason Holmberg (Wild Me), Declan, Ștefan Istrate
Chinmay Talegaonkar (ctalegaonkar@ucsd.edu)
2022-09-14 11:51:26

Hello everyone, I am Chinmay from India. I am just starting my PhD at UC San Diego in the ECE department. I will be focusing on 3D vision problems, and real-world applications of computational imaging methods. I am quite interested in finding applications of my research for wildlife conservation! Before starting my Ph.D. I worked as a deep learning engineer at a startup in the Bay Area, where I developed object detection models for industry use cases. I attended @Sara Beery’s talk at ICCP 2022 at Caltech, which brought me to this amazing group! Outside of work, I like hiking and visiting national parks.

👋 Valentin Gabeff, Stephanie O'Donnell, Toryn Schafer, Sara Beery, Dan Morris, Lucia Gordon, Graeme Phillipson, Suzanne Stathatos, Jason Holmberg (Wild Me), Lauren Gillespie, Rita Pucci
Ben Weinstein (benweinstein2010@gmail.com)
2022-09-14 12:47:32

I'm working this week on making/contributing to an open-source python package for aligning raw airborne imagery from UAVs with the stitched georeferenced mosaic, so that users can run machine learning models on high-quality raw imagery and then try to place those detections/classifications into world coordinates. Many of you will have experience in these areas and I'm looking for any opinions/data on workflows. Given that the orthomosaic exists in a 3d or 2.5d space, every pixel in the raw imagery does not have a corresponding world coordinate. The larger the z-dimension displacement, the larger the shift in the transformed location (orange box). This is in collaboration with https://github.com/UTokyo-FieldPhenomics-Lab/EasyIDP, a package that performs the reverse action: it finds the location in the raw imagery given a point in the orthophoto. I think this is a really underserved part of the workflow, and something you can find many posts about in the assorted photogrammetry tools (pix4d, agisoft, etc.), but no detailed workflow and example. All thoughts welcome. See this thread for technical details/code: https://github.com/UTokyo-FieldPhenomics-Lab/EasyIDP/discussions/44#discussioncomment-3579788

👍 Rowan Converse
😍 Suzanne Stathatos, Sara Beery, Rita Pucci
🌳 Sara Beery
👀 Sean Nachtrab
Daniel Davila (daniel.davila@kitware.com)
2022-09-14 15:33:14

*Thread Reply:* One of my colleagues specializes in UAV-based 3d vision, and they're working through a lot of these problems on our NOAA ADAPT program. I bet he'd love to talk your ear off about what they've done, if you want me to connect y'all. Just let me know! This is outside my wheelhouse though lol

Ben Weinstein (benweinstein2010@gmail.com)
2022-09-14 15:48:39

*Thread Reply:* happy to talk to anyone. It feels like the kind of task that has been solved (it's more annoying than it is hard) but just does not exist in a repeatable way. I keep waiting for someone to just appear and be like, oh that's the 'export shape annotations' button, etc. But I've talked to developers from all the software companies and I haven't seen anything that suffices.

Tjomme Dooper (tjomme@fruitpunch.ai)
2022-09-15 05:43:24

*Thread Reply:* Hey Ben, this sounds super relevant to a project we're running at FruitPunch at the moment.

Tjomme Dooper (tjomme@fruitpunch.ai)
2022-09-15 05:44:09
Ethan Shafron (ethan.shafron@gmail.com)
2022-09-16 09:37:19

*Thread Reply:* I've worked on similar tasks where we used template matching with openCV to align overlapping geo-referenced DiMAC images - it's definitely not the fastest or sleekest option, but I think it's relevant. I'm not sure exactly how well it would work for trying to match ortho vs raw images, but it might be worth testing as a relatively simple baseline.
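The template-matching baseline Ethan mentions is essentially normalized cross-correlation. A dependency-light NumPy sketch, equivalent in spirit to OpenCV's cv2.matchTemplate with TM_CCOEFF_NORMED (though far slower), using a synthetic image:

```python
import numpy as np

# Normalized cross-correlation template matching: slide the template over
# the image and score each position by mean-centered correlation in [-1, 1].
def match_template(image, template):
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_xy = -2.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y + th, x:x + tw]
            patch = patch - patch.mean()
            denom = np.sqrt((patch ** 2).sum() * (t ** 2).sum())
            score = (patch * t).sum() / denom if denom else 0.0
            if score > best_score:
                best_score, best_xy = score, (y, x)
    return best_xy, best_score

# Plant the template inside a larger synthetic image and recover its offset.
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = img[12:20, 25:33].copy()
(y, x), score = match_template(img, tmpl)
print(y, x)    # 12 25
```

For real ortho-vs-raw matching you would run this (or the OpenCV version) per candidate region, since perspective distortion limits how far a rigid template can drift before the correlation degrades.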

👍 Ben Weinstein
Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2022-09-18 13:38:23

*Thread Reply:* When we did this sort of thing for Mars (matching rover ground imagery to orbit, different satellites, etc.), typically the process was to generate keypoints (SIFT) in both domains and then run some robust matching and warping process. Which is probably the "draw the rest of the -- owl" bit... and I'm not sure how flexible the pipeline was for non-Martian data...

Probably depends somewhat on expected localisation accuracy? But also if you're doing detection on imagery that was already used to generate the orthomosaic then don't you already have a warping function that's used to project points from each raw image into the ortho? (given the camera intrinsic/extrinsic info)

https://www.sciencedirect.com/science/article/pii/S0019103516303086
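The warping function Josh alludes to is just the pinhole projection: with intrinsics K and pose (R, t) from the photogrammetry solve, a world point maps to a raw-image pixel as below (all numbers here are invented for illustration):

```python
import numpy as np

# Standard pinhole model: pixel = K [R | t] X_world, then perspective divide.
K = np.array([[1000.0, 0.0, 320.0],   # fx, skew, cx
              [0.0, 1000.0, 240.0],   # fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # camera looking straight down +Z
t = np.array([0.0, 0.0, 10.0])         # camera 10 units above the scene

def project(point_world):
    p_cam = R @ point_world + t        # world -> camera coordinates
    uvw = K @ p_cam                    # camera -> homogeneous pixel coords
    return uvw[:2] / uvw[2]            # perspective divide -> (u, v)

print(project(np.array([1.0, 2.0, 0.0])))   # [420. 440.]
```

Going the other way, from pixel back to world, requires intersecting the viewing ray with the surface model, which is exactly where the z-displacement shift Ben mentions comes from.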

Ben Koger (benkoger@gmail.com)
2022-09-27 11:23:15

*Thread Reply:* We have a preprint (also in review) that describes our method for doing this (or at least something similar) with drone video: https://www.biorxiv.org/content/10.1101/2022.06.30.498251v1 Basically, use structure from motion to generate 3D georeferenced landscape models from some of the raw images, and then use the resulting calculated camera matrices from those frames to project pixel locations into the georeferenced 3D landscape space. We use local features in the raw frames to estimate the correct camera matrix even for frames not used to build the landscape model. The overall code we use is here: https://github.com/benkoger/overhead-video-worked-examples

👍 Ben Weinstein
Sara Beery (sbeery@caltech.edu)
2022-09-15 13:45:42

NeurIPS workshop that explicitly requests submissions from real-world applications including conservation!

https://twitter.com/yoonholeee/status/1570461511797338113

❤️ Justin Kay, Oisin Mac Aodha, Mark Goldwater, Suzanne Stathatos, Alan Papalia, Lucia Gordon, Lukas Picek, Eddie Zhang, Jaanak, Dhruv Sheth, Robin Zbinden, Gedeon, Omiros Pantazis, Agnethe Seim Olsen, Fadel, Ando Shah, Rita Pucci
😎 Jon Van Oast, Dhruv Sheth
Sara Beery (sbeery@caltech.edu)
2022-09-15 13:45:58

*Thread Reply:* "This workshop aims to convene a diverse set of domain experts and methods-oriented researchers working on distribution shifts. We are broadly interested in methods, evaluations and benchmarks, and theory for distribution shifts, and we are especially interested in work on distribution shifts that arise naturally in real-world application contexts. Examples of relevant topics include, but are not limited to: • Examples of real-world distribution shifts in various application areas. We especially welcome applications that are not widely discussed in the ML research community, e.g., education, sustainable development, and conservation. We encourage submissions that characterize distribution shifts and their effects in real-world applications; it is not at all necessary to propose a solution that is algorithmically novel. "

Sara Beery (sbeery@caltech.edu)
2022-09-15 13:46:33

*Thread Reply:* • "Benchmarks and evaluations. We especially welcome contributions for subpopulation shifts, as they are underrepresented in current ML benchmarks. We are also interested in evaluation protocols that move beyond the standard assumption of fixed training and test splits -- for which applications would we need to consider other forms of shifts, such as streams of continually-changing data or feedback loops between models and data? "

Kamran Zolfonoon (kzolfonoon@umass.edu)
2022-09-15 16:31:14

Hi everyone! I’m Kamran, an MS-CS student at the University of Massachusetts, Amherst. I’ll be working with the US Fish & Wildlife Service this semester to build a computer vision pipeline that collects data from Bald Eagle nest cameras. The hope is to use this data to quantify breeding adult Bald Eagle nest attendance patterns and provisioning behavior.

Currently working on turning unlabeled images into a dataset. If anyone has experience using cloud labeling services with camera trap (or similar) images, I’d love to connect and learn from your experience!

👋 Suzanne Stathatos, Sara Beery, Roni Choudhury, Declan, Dan Morris, Omiros Pantazis, Jaanak, Rita Pucci
Mel Guo (melguo236@gmail.com)
2022-09-21 15:37:36

*Thread Reply:* Hey Kamran! Thanks for your intro, your work with USFWS sounds so interesting! Is there a webpage for your research project on data collection from bald eagle nest cameras? Super interested in learning more as a fellow MS-CS student interested in birds!

Vinicius Amaral (amaralvin7@gmail.com)
2022-09-16 00:51:25

Hi all! I'm a PhD student/oceanographer at the University of California Santa Cruz and I use CV to classify images of particles in the ocean. I'm especially interested in topics such as class imbalance and domain shift. Happy to be here and learn from you all.

👋 Suzanne Stathatos, Jon Van Oast, Robin Zbinden, Sara Beery, Déva Sou, Lucia Gordon, Roni Choudhury, Declan, Dan Morris, Omiros Pantazis, Jaanak, Rita Pucci
Rebecca (rebeccayap92@uchicago.edu)
2022-09-16 08:44:00

Hi all! I thought I would introduce myself. I’m a graduate student at the University of Chicago Harris School of Public Policy. I’m born and bred in Singapore. I am highly interested in conservation and have a running idea that I think will turn into a series of projects. Very excited to meet and interact with all of you at some point!

👋 Sara Beery, Roni Choudhury, Declan, Dan Morris, Suzanne Stathatos, Omiros Pantazis, Jaanak, Rita Pucci
Roni Choudhury (roni.choudhury@kitware.com)
2022-09-16 10:03:30

*Thread Reply:* fellow u of c graduate here 👋

👋 Rebecca
Ethan Shafron (ethan.shafron@gmail.com)
2022-09-16 10:05:07

Hi everyone, I figured I would introduce myself as well - I'm a graduate student and staff research specialist at the University of Montana, where I work for the Spatial Analysis Lab and am a student in the Global Climate and Ecology Lab. My background is in imaging spectroscopy, computational ecology, and remote sensing, and my graduate work is focused on understanding where, why, and when carbon allocation towards growth in trees becomes decoupled from primary productivity. This is a bit outside the realm of computer vision, but I think there are a lot of leverage points that computer vision could help with - there's still quite a bit of mechanistic uncertainty in the terrestrial carbon cycle, and a ton of that is driven by bottlenecks in data acquisition and processing of organism-scale data (think soils, wood, leaves, roots). How can we close that gap?! Maybe CV is part of it? Maybe not?

I'd be curious to touch base with any other folks working more in the vegetation/biogeochemistry/paleoclimate/remote sensing space - I think the use cases of CV in this space are a bit more narrow, but could go a long way in improving our understanding of biogeochemical processes as the climate continues to change.

👋 Roni Choudhury, Sara Beery, Justin Kay, Dan Morris, Carly Batist, Rowan Converse, Suzanne Stathatos, Benjamin Kellenberger, Declan, Jason Holmberg (Wild Me), Omiros Pantazis, Jaanak, Adam Noach, Heather, Lauren Gillespie, Rita Pucci, Anton Alvarez
👍 Roni Choudhury, Sara Beery, Carly Batist, Jason Holmberg (Wild Me)
Declan (declan.pizzino@consbio.org)
2022-09-16 15:59:59

*Thread Reply:* While CV seems to be dominant in this slack, there is plenty of space for using other AI solutions for conservation applications! I'm part of a small, primarily RS-focused, team of geospatial analysts at a sm/med conservation non-profit. Happy to connect with you to chat about the work we do at CBI

🎉 Carl Boettiger, Sara Beery
Rebecca (rebeccayap92@uchicago.edu)
2022-09-16 16:43:58

*Thread Reply:* I’m also beginning to see a lot of applications that ML (and I guess CV) can do in ecology, and hoping to learn more - about both ecology and ML in tandem (I consider myself a newbie in both).

Heather (h_peacock@ducks.ca)
2022-09-19 13:43:44

*Thread Reply:* Hi @Ethan Shafron, I am also interested in using ML and RS for carbon related questions (and the ecological and climate change implications), mostly storage atm, would be interested to hear more about your research and if you have any thoughts on modelling (quantifying) carbon storage in terrestrial ecosystems. Thanks!

Ethan Shafron (ethan.shafron@gmail.com)
2022-09-23 13:02:02

*Thread Reply:* Hi @Declan - I've actually heard a fair bit about CBI! My partner works for the California Native Plant Society, and specifically works with EEMS - would love to hear about what you all are doing on the RS side of things though. Ofc there are plenty of non-CV ML solutions, I'm just off the heels of Sara's CV for ecology workshop, so that's what's been occupying my mind these days :)

@Rebecca - welcome! It's always cool to see "newbies" in spaces like these - if you're here then you're probably curious about a lot of different things, which is basically what drives this whole sub-field!

@Heather - There's a lot of stuff on Carbon modelling these days and it's hard to sift through it all to find what's relevant at varying spatial/temporal scales. Happy to chat about C fluxes and storage and hear more about what you're working on!

😍 Declan
Rebecca (rebeccayap92@uchicago.edu)
2022-09-30 11:19:59

*Thread Reply:* Hi Ethan! I am very curious about this side of things. Do you think I would be able to PM to chat more with you? I have a few questions about how to grow and position myself to be on the conservation side of things, and would love to chat about an idea that has been brewing.

Ethan Shafron (ethan.shafron@gmail.com)
2022-09-30 11:35:39

*Thread Reply:* absolutely!

Erik Peterson (omahaesp@gmail.com)
2022-09-16 22:17:53

Hello all — bit different here from a lot of the academic perspectives. I'm a product manager, former software engineer, in big enterprise B2B, but I do a lot of day to day data analysis and increasingly ML. Looking to help with any open source positioning or some entrepreneurial approaches to private sector stuff

👍 Gedeon, Dan Morris, Omiros Pantazis, Sara Beery
👋 Suzanne Stathatos, Jaanak, Adam Noach, Rita Pucci
Gedeon (gedeonmuhawenayo@gmail.com)
2022-09-17 06:49:18

Hi all, I am a Machine Learning Research Engineer at Rwanda Space Agency. I use CV & Geospatial data for conservation. Happy to be here.

👋 Gabriel Tseng, Dan Morris, Omiros Pantazis, Lucia Gordon, Sara Beery, Jason Holmberg (Wild Me), Declan, Suzanne Stathatos, Jaanak, nyakundi lamech, Adam Noach, Gedeon, Rita Pucci
Josh Seltzer (jyseltz@gmail.com)
2022-09-17 12:46:02

Hi everyone! It's nice to find this community and to see a few familiar people here 👋

To quickly introduce myself -- I'm passionate about providing technical solutions to help empower conservation projects with local and indigenous knowledge, and I am super interested in combining exciting advances in AI (especially NLP / computer vision / machine listening) with a variety of hardware systems that can be used for ecological research and biodiversity conservation. Besides biodiversity, I also see a lot of opportunities for AI for language conservation (but have been struggling to find other people interested in that space).

I also run a tech company (nexxt.in) and am living in Panama at the moment (if anyone is nearby feel free to reach out 😉), and might be slightly obsessed with all things monkey related 🐒 if anyone wants to chat about that!

👋 Sara Beery, Jason Holmberg (Wild Me), Declan, Suzanne Stathatos, Jaagat P., Jaanak, Adam Noach, Dan Morris, Heather, Rita Pucci
🙌 Gedeon
👋 Pen-Yuan Hsing
Heather (h_peacock@ducks.ca)
2022-09-19 13:40:04

*Thread Reply:* Hi Josh! Fellow monkey enthusiast here! I'm finishing my PhD on global primate biogeography and conservation. I suppose the first question is, what is your favourite monkey?

Josh Seltzer (jyseltz@gmail.com)
2022-09-20 08:53:23

*Thread Reply:* Hey @Heather, fellow Canadian as well I suppose (guessing from your email)? And wow that sounds really cool!! I would love to hear more about your studies 🙂 and I love squirrel monkeys because of how mischievous they are for sure haha, but I am fascinated by most of the platyrrhines

Ashley Chang (she/her) (ash.chang0921@gmail.com)
2022-09-17 22:54:28

Hello! I am Ashley, and I am a senior at California High School (San Ramon, CA). I love anything Data Science related, especially ML and AI! I am very passionate about many environmental and social justice topics, in which I utilize ML and AI to help identify and potentially find solutions for them. (I am a huge advocate for problem-solving)! Furthermore, I joined this Slack due to its diverse community in AI! :)

Although I am not in college yet, I plan to pursue research once I get there! I love learning and growing, and I love to teach others. Aside from academics and any STEM-related passions, I love to write, go on hikes, and watch any cyberpunk-related shows!

I currently hold an internship with NASA, in which I utilize CVAT (Computer Vision Annotation Tool) to identify biofilm in caves. Then, the identifications (i.e. “annotations”) are integrated into a ML and AI algorithm to ultimately be used on the NeBula robot. Hopefully in the distant future, the NeBula robot will be sent to Mars to potentially identify life in Martian volcanic caves!

I am also the president of the Data Science club at my school, in which I currently introduce AI and ML to my peers. I would love to expand my knowledge in AI onto projects, so if anyone is interested in talking about AI, ML, or something completely unrelated, I would be more than happy to chat about it! Super excited to be a part of this community. I hope you all have a great day. :)

👋 Sara Beery, Suzanne Stathatos, Eddie Zhang, Adam Noach, nyakundi lamech, Josh Seltzer, Dan Morris, Jason Holmberg (Wild Me), Heather, Rita Pucci
😎 Jon Van Oast, Jason Holmberg (Wild Me)
👋 Pen-Yuan Hsing
Josh Seltzer (jyseltz@gmail.com)
2022-09-18 09:56:08

*Thread Reply:* Super cool that you've done so much while still in high school, especially the NASA project that sounds amazing!

❤️ Ashley Chang (she/her)
Valentin Lucet (valentin.lucet@gmail.com)
2022-09-18 17:11:22

*Thread Reply:* Yes it is extremely impressive!

❤️ Ashley Chang (she/her)
Adam Noach (amn55@cam.ac.uk)
2022-09-18 06:02:46

Hi everyone! In two weeks I’ll be starting a PhD in computational ecology at the University of Cambridge. The project is focused on peat-forming wet woodlands - will start by pitching new plots, estimating C stocks using some combination of drone ALS+photogrammetry, TLS and other ground measurements, and then looking at modeling the C cycle/ecosystem dynamics more generally + potential for carbon storage. There’s also scope for looking at how these ecosystems respond to sea-level rise and their flood mitigating potential using existing satellite data.

I’m largely here thanks to having come across the ID Trees project and @Ben Weinstein ’s work on tree crown segmentation (thank you!) - which I still find very interesting - and which inspired my master’s project last year. I’m delighted to be part of this community and especially keen on finding out what any other forest ecologists in here are up to!

🙌 Josh Seltzer, Ashley Chang (she/her), Dan Morris, Sara Beery, Eddie Zhang, Suzanne Stathatos, Rita Pucci
Rebecca (rebeccayap92@uchicago.edu)
2022-09-18 09:01:30

*Thread Reply:* Very interesting research! I also looked at the ID Trees project - super intriguing to me since I want to look into trees and forests!

🌳 Adam Noach
Rebecca (rebeccayap92@uchicago.edu)
2022-09-18 09:02:37

*Thread Reply:* I would love to read your master’s project if you are willing to share, @Adam Noach!

Adam Noach (amn55@cam.ac.uk)
2022-09-18 12:13:02

*Thread Reply:* I'm in that awkward middle ground of extending it into a paper - but when it's cooked up ( & if it's appropriate and welcome) I'd be happy to share it here. I can see it's already been mentioned but DeepForest will be up your alley if ID Trees was!

Rebecca (rebeccayap92@uchicago.edu)
2022-09-18 20:57:59

*Thread Reply:* Got it! i will look at that.

👍 Adam Noach
Rebecca (rebeccayap92@uchicago.edu)
2022-09-18 20:58:24

*Thread Reply:* Hmm when I have mastered Python in entirety, I will play around with this package.

Cathy Atkinson (cathy.atkinson@highlandsrewilding.co.uk)
2022-09-19 05:24:02

Hi all. I'm Cathy from the UK. I work for Highlands Rewilding where we are using all sorts of remote sensing, camera traps, acoustic sensors etc to monitor biodiversity and carbon storage (forestry and peat restoration) on our rewilding sites in Scotland. 🌳🌲🦌🐦🦊🐁🐗🌱

🌳 Tjomme Dooper, Rebecca, Dan Morris, Lucia Gordon, Marconi Campos, Sara Beery, Ashley Chang (she/her), Adam Noach
👋 Suzanne Stathatos, Rita Pucci, Declan
👋 Ando Shah
Heather (h_peacock@ducks.ca)
2022-09-19 13:37:38

Hi! I'm Heather from Canada, I am finishing my PhD in Geography, using GIS to map and quantify global primate habitat loss and fragmentation. I also work at DUC as a GIS specialist. I am very interested in ML models for conservation, and models for regional scale landscape ecological research. Hope to learn a lot from this community!

👏 Dan Morris, Sara Beery, Suzanne Stathatos, Declan, Ashley Chang (she/her), Lauren Gillespie, Olivier Gimenez, Adam Noach, Rita Pucci
Declan (declan.pizzino@consbio.org)
2022-09-19 14:08:42

*Thread Reply:* Hi Heather! Glad to see you here 😄

Ben Weinstein (benweinstein2010@gmail.com)
2022-09-19 14:09:31

Hi everyone. Can we do a big round-up on the current drones that teams are using? We are running out of time to update from our DJIs to get up to date with the federal drone list: https://www.diu.mil/blue-uas-cleared-list. A general desire is:
• Quadcopter for heavy wind conditions, landing on a boat (we have fun videos of staff diving into the alligator-heavy Everglades to recover vertical take-off drones)
• Under 30k
• PPK accuracy for geotagging each image
• On the US federal blue list
We own several DJI Inspires and a Wingtra One (Gen I).

diu.mil
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2022-09-20 04:33:28

*Thread Reply:* Hey Ben - this might be a question to also drop into the WILDLABS discussions and tag to our drones group so you can get feedback from the wider conservation tech community. https://wildlabs.net/

wildlabs.net
Devis Tuia (devis.tuia@epfl.ch)
2022-09-20 08:22:40

For all the new people around: we have several other channels that are super interesting, don't forget to browse and join those (e.g. #jobs, #newpapers, #upcomingevents, …)

🙌 Sara Beery, Rita Pucci
🙏 Alessandra Sellini, Carly Batist, Robin Zbinden, Yseult Hb, Vijay Karthick, Adam Noach
Pietro Perona (perona@caltech.edu)
2022-09-20 09:53:45

#camera-traps-hardware #camera_traps #general -- I would like to hear recommendations on camera traps. I am hanging out in NE Italy and would like to place a few camera traps around to see what's moving at night and create a little dataset. Pushing sensor resolution is of little interest to me. I would like instead to find cameras that have low latency (so that I do not miss the animals that trigger the camera) and good infrared lighting (most of my observations will happen at night). Thank you.

Andrew Schulz (akschulz@gatech.edu)
2022-09-20 10:42:46

*Thread Reply:* Hi @Pietro Perona there is also a #camera_traps channel as well where this could be good to ask!

Pietro Perona (perona@caltech.edu)
2022-09-20 11:09:07

*Thread Reply:* thanks!

Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2022-09-20 12:59:22

*Thread Reply:* We have great nighttime pictures of quick animals with the Bushnell Trophy No Glow. Seems the new version is even better than the ones we bought in 2020: https://www.bushnell.com/trail-cameras/standard-trail-cameras/core-s-4k-no-glow-trail-camera/PB-119949C.html

Bushnell
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2022-09-20 12:59:36

*Thread Reply:* Looking forward to those NE italian night shots!

Pietro Perona (perona@caltech.edu)
2022-09-21 10:48:45

*Thread Reply:* Thank you Tiziana!

Majid Mirmehdi (m.mirmehdi@bristol.ac.uk)
2022-09-21 15:06:48

Hi It might be helpful if there was a dedicated channel for announcing datasets?

✔️ Jon Van Oast, Sara Beery, Carl Boettiger
Ben Weinstein (benweinstein2010@gmail.com)
2022-09-21 15:40:08

*Thread Reply:* why not under #new_papers ?

👍 Sara Beery, Carly Batist
Majid Mirmehdi (m.mirmehdi@bristol.ac.uk)
2022-09-21 18:48:38

*Thread Reply:* It could be under new papers…it’s just that not all new papers will introduce new datasets. Would be much easier to have a channel with only links to datasets old and new.

Sara Beery (sbeery@caltech.edu)
2022-09-22 15:10:50

*Thread Reply:* You're free to make any channels you think are useful!

👍 Majid Mirmehdi
Olof Mogren (olof.mogren@ri.se)
2022-09-22 11:05:37

Hi everyone! I'm a researcher at RISE, the Swedish research institute. I have a PhD in computer science and am now heading the deep learning research part of our center for applied AI. We have some exciting projects related to biodiversity and modelling of earth and water. I was just invited to this Slack by @Devis Tuia, whom I just had the pleasure of hosting at our seminar series, RISE Learning Machines, where he gave a great talk. We are looking for master's students for a couple of projects this spring, including Deep learning for detection of coffee berry disease and Aerial View Goal Localization with Reinforcement Learning. Great to be here!

RISE
👀 Stephanie O'Donnell, Aleksis Pirinen
🙌 Stephanie O'Donnell, Devis Tuia, Adam Noach, Edvin Listo Zec, Dan Morris, Robin Zbinden, Josh Seltzer, Suzanne Stathatos, Sara Beery, Lily Xu, Aleksis Pirinen, Malte Pedersen, Jeff Reed
😎 Jon Van Oast, Sara Beery, Edvin Listo Zec
🙏 Michael Bunsen, Sara Beery
Abhinav Sharma (abhinav.sharma@iiitg.ac.in)
2022-09-22 13:37:32

Dear All, I am a senior-year undergraduate student in the Computer Science and Engineering Department at the . In the past I have research experience of working at Georgia Institute of Technology on Graph Neural Networks and Approximate Computing and recently I have worked on Reinforcement Learning and Causal Inference for design space exploration in Facebook's Augmented Reality System Investigator. Looking forward to contributing in the interdisciplinary fields. Feel free to connect with me on LinkedIn.

👋 Suzanne Stathatos, Sara Beery, Carly Batist, Chinmay Talegaonkar, Jason Holmberg (Wild Me), Stephanie O'Donnell, Lily Xu, Adam Noach, Jeff Reed
Dan Morris (agentmorris@gmail.com)
2022-09-22 17:09:35

Really neat to see Esri releasing a bunch of pretrained models relevant to this Slack, including models based on data that folks on this Slack have helped make available:

https://www.esri.com/arcgis-blog/products/arcgis/imagery/new-pretrained-deep-learning-models-sept-2022/

That includes a tree detection model (based on @Ben Weinstein's NEON tree dataset), a bird detection model (based on @Benjamin Kellenberger's Aerial Seabirds West Africa dataset), an elephant detection model (based on the Aerial Elephant Dataset), and a bunch of other interesting ones that aren't quite so on-the-nose conservation-focused, but are relevant and interesting.

ArcGIS Blog
🙌 Stephanie O'Donnell, Riccardo de Lutio, Stefan Schneider, Catherine Villeneuve, Justin Kay, Sara Beery, Carly Batist, Jason Holmberg (Wild Me), Suzanne Stathatos, Jake Wall, Kasirat, Josh Seltzer, Fadel, Jeff Reed
😎 Jon Van Oast, Rowan Converse, Sara Beery, Jason Holmberg (Wild Me)
👍 Monty Ammar
Ben Weinstein (benweinstein2010@gmail.com)
2022-09-22 22:41:41

*Thread Reply:* this is cool, but also kind of terrifying. There is a lot of improvement to be made on DeepForest, and a company with ESRI's capability could really make it better. I get nervous when it just gets passed around as a 'tree detector' without all the docs associated with DeepForest about all the times it doesn't work/could be better.

🤔 Josh Seltzer
💯 Declan, Emily Lines
Ben Weinstein (benweinstein2010@gmail.com)
2022-09-22 23:00:13

Does anyone know the current url (@Sara Beery, @Pietro Perona) for the pasadena tree dataset at visipedia? Pasadena Urban Trees "This includes dense aerial and street view imagery for 30,000 trees labeled with geo-location and tree species from Pasadena, California." [URL] [Paper] https://visipedia.github.io/datasets.html

Visipedia
Sara Beery (sbeery@caltech.edu)
2022-09-22 23:23:37

*Thread Reply:* I think @Pietro Perona is the best bet here

Pietro Perona (perona@caltech.edu)
2022-10-17 04:33:28

*Thread Reply:* We are looking for the data - usual problem with switching servers. Please also contact Jan Wegner at ETH

👍 Ben Weinstein
Brad Pickens (bradley_pickens@fws.gov)
2022-09-23 09:57:44

I'm curious if there is interest in this ESA call, as it could bring together both fish and wildlife experts in deep learning: the Ecological Society of America invites proposals for Symposia and Organized Oral Sessions to be hosted in Portland, OR, Aug 6-11, 2023: https://www.esa.org/portland2023/session-types/submit-a-proposal-for-an-invited-paper-session/

😎 Jason Holmberg (Wild Me), Sara Beery
Dan Morris (agentmorris@gmail.com)
2022-09-23 11:18:30

*Thread Reply:* Oh I'm super-interested. I had no idea ESA was in Portland next year. My dream is to get all the benefits of in-person conferences without actually going anywhere, which I'll roughly define as "without leaving Seahawks territory", and you're telling me I can have TWS in Spokane followed by ESA in Portland, and meet lots of conservation folks, and never really have to go anywhere? Sign me up!

👍 Kakani Katija, Michael Bunsen, Sara Beery
❤️ Michael Procko, Sara Beery, Carl Boettiger
Ben Weinstein (benweinstein2010@gmail.com)
2022-09-23 13:43:09

*Thread Reply:* I live in Portland and will be submitting a proposal; I welcome everyone to join me. I'll post it here. I'll probably host a dinner too.

👍 Sara Beery, Casey Youngflesh, Carl Boettiger
Ben Weinstein (benweinstein2010@gmail.com)
2022-09-23 13:44:18

*Thread Reply:* Something like, "Automated ecological monitoring using Computer Vision: Where are we going? How do we get there?"

Brad Pickens (bradley_pickens@fws.gov)
2022-09-23 15:17:36

*Thread Reply:* Thanks Dan and Ben! I wonder if we could get a couple of sessions proposed?! In addition to Ben's broad topic area, maybe something on Application of Deep Learning in Wildlife and Fisheries? Organized oral sessions are 6 talks/Symposia are 4 talks (longer; broad topics)/Inspire Sessions are 6-10 talks. Ben- what category are you thinking? ...Conference theme is "for all ecologists" - meaning both academics and us applied folks 😀

👍 Yseult Hb
Michael Bunsen (notbot@gmail.com)
2022-09-23 15:28:34

*Thread Reply:* I am also in Portland and would love to attend if not present as well. I could contribute some slides on the state of automated insect monitoring, or perhaps do a shorter "Inspire Session" on that topic. I would love to rally support for getting a network of insect monitoring stations setup in the USA, or at least up and down the Pacific Northwest

Emily Lines (erl27@cam.ac.uk)
2022-09-23 17:24:35

*Thread Reply:* @Ben Weinstein that sounds like a fantastic proposal - I'd be interested!

Sara Beery (sbeery@caltech.edu)
2022-09-24 08:37:34

*Thread Reply:* I'm also definitely interested! I've never been to an in-person ESA, and I share @Dan Morris's enthusiasm for meeting lots of ecologists and conservationists without needing to go too far!

Casey Youngflesh (caseyyoungflesh@gmail.com)
2022-09-26 11:17:46

*Thread Reply:* Yes to all of this! @Ben Weinstein I’d be game

Atul Ingle (ingle@uwalumni.com)
2022-09-26 12:29:10

*Thread Reply:* For those planning to attend, I'd also like to invite you to visit my new computational imaging lab at Portland State University in downtown Portland!

(One of my research thrusts is in high speed/low light image sensing under resource constrained scenarios. I believe this can have applications in ecological monitoring esp situations where conventional vision sensors+algorithms fail.)

👀 Emily Lines, Ethan Shafron
❤️ Sara Beery, Carl Boettiger, Barbie D
🙌 Michael Bunsen
Brad Pickens (bradley_pickens@fws.gov)
2022-09-30 10:42:14

*Thread Reply:* For background, it is the largest ecology meeting in the US -- pre-COVID, upwards of 4,000 people regularly attended. ESA is also one of the more high-tech ecology conferences and a good place to see the next big ideas in ecology as a whole (overarching plants, insects, wildlife, fisheries, landscape ecology, land cover mapping). Some of these topics can be rare at other conferences. Looks like we should have interest for at least 1-2 sessions...

👍 Michael Bunsen
Michael Bunsen (notbot@gmail.com)
2023-01-23 19:51:20

*Thread Reply:* Hi all! I'd like to keep track of this thread related to the ESA conference in Portland this year, and everyone in the PNW who is generally interested in staying in touch! The messages in this Slack workspace are supposed to expire in 90 days. I've created a new channel #region-pnw, which doesn't solve the 90-day problem, but might help us stay familiar with each other.

Ben Weinstein (benweinstein2010@gmail.com)
2023-01-23 20:14:38

*Thread Reply:* Our session (@Dan Morris, @Sara Beery, @Emily Lines, @Tessa Rhinehart, @Kakani Katija), "The future of ecological monitoring is collaboration with Artificial Intelligence", was accepted.

🙌 Sara Beery, Michael Bunsen, Kakani Katija, Emily Lines, Tessa Rhinehart
Michael Bunsen (notbot@gmail.com)
2023-01-23 20:21:03

*Thread Reply:* Oh fantastic! I am part of another session proposal focused on insect monitoring. Still waiting to hear back, but I look forward to the event regardless. Perhaps an AI for Conservation happy hour is in order?

❤️ Tessa Rhinehart
👍 Casey Youngflesh, Kakani Katija
Eric Colson (ecolson@gmail.com)
2022-09-26 11:45:12

Hi All, so happy to be joining this group. I am a long-time leader of data science teams in Industry (Netflix, Stitch Fix, etc). Looking to see where i can help wrt to environment/conservation.

👋 Josh Seltzer, Stephanie O'Donnell, Declan, Dan Morris, Felipe Parodi, Carly Batist, Lucia Gordon, Ben Weinstein, Jason Holmberg (Wild Me), Marconi Campos, Jaanak, Abhay, Peter Bull, Casey Youngflesh, Andrew Schulz, Adam Noach, Sara Beery, Carl Boettiger, Lily Xu
🎉 Jon Van Oast, Jason Holmberg (Wild Me), Olivier Gimenez, Marconi Campos, Peter Bull, Sara Beery
Abhay (abhaykash12@gmail.com)
2022-09-26 17:07:21

*Thread Reply:* Hi Eric! Nice to see you here! (I was a DS at Sfix)

👋 Eric Colson
Carl Boettiger (cboettig@berkeley.edu)
2022-09-27 22:34:47

*Thread Reply:* Hey Eric, great to see you here!

👋 Eric Colson
Abhay (abhaykash12@gmail.com)
2022-09-26 17:21:42

And since Eric is here, I must say that a lot of my thinking around making life easy for downstream users by building horizontals is shaped by what he and his team had set up at Sfix 🙂

🙌 Eric Colson
Kakani Katija (kakani@mbari.org)
2022-09-27 15:58:08

Excited to share with the community our first peer-reviewed FathomNet publication in Scientific Reports: https://t.co/G1gLUdNDde. FathomNet is a multi-institutional multi-individual effort to build a global labeled image database for underwater life (and other objects). Seeded with data from MBARI, NOAA, and National Geographic Society, we look forward to data contributions and expertise from a broad user community. Check it out, share it with your networks. FathomNet can be accessed at www.FathomNet.org, follow us on Twitter @fathomnet, and we have a python API at www.GitHub.com/FathomNet. Looking forward to receiving your feedback!

Nature
🎉 Jon Van Oast, Peter Bull, Ben Weinstein, Michael Procko, Carly Batist, Oisin Mac Aodha, Abhay, Dan Morris, Avi Sundaresan, Subhransu Maji, Marconi Campos, Josh Seltzer, Andrew Schulz, Carl Boettiger, Devis Tuia, John Martinsson, Riccardo de Lutio, Lukas Picek, Viktor Domazetoski, Timm Haucke, Yseult Hb, Toryn Schafer, Eddie Zhang, Rose Hendrix, Justine Boulent
👀 Michael Procko, Sara Beery, Nico Lang, Aleksis Pirinen, Rita Pucci
🦀 Peter Bull, Sara Beery, Carl Boettiger, Aleksis Pirinen, Lukas Picek, Levi Cai
Ben Weinstein (benweinstein2010@gmail.com)
2022-09-27 16:07:42

*Thread Reply:* does FathomNet come with a detector in the python client? Like if I have an image, can I go

import fathomnet
m = fathomnet.model()
predictions = m.predict_image("my_image")
predictions["class"]  # <awesome_sea_snake>

Kakani Katija (kakani@mbari.org)
2022-09-27 16:16:10

*Thread Reply:* No it doesn't. You can check out our model zoo at https://github.com/fathomnet/models and more details on the python api are at https://github.com/fathomnet/fathomnet-py

Daniel Davila (daniel.davila@kitware.com)
2022-09-27 18:01:31

*Thread Reply:* Congrats! Super exciting

🎉 Kakani Katija
Dan Morris (agentmorris@gmail.com)
2022-09-27 20:07:34

*Thread Reply:* The model zoo is very cool! I have no particular reason to want to detect benthic organisms, but who am I to turn away a well-documented model zoo? So FWIW I was able to download and run the "MBARI Monterey Bay Benthic" model, it was very straightforward.

👍 Kakani Katija
Kakani Katija (kakani@mbari.org)
2022-09-27 22:02:09

*Thread Reply:* Awesome. Would love to get your input @Dan Morris on how we can make it better (e.g, standardization, etc).

Dan Morris (agentmorris@gmail.com)
2022-09-27 22:06:55

*Thread Reply:* My only off-the-cuff recommendation would be to include images on the model zoo README that convey "this is the gestalt of the training data for each model"... that makes it a lot easier for a new user to judge whether the angle/color/murkiness of their images is close enough to the domain of the model to expect reasonable results, and IMO no amount of verbal description can capture that. But once a user decides "yes, this model is reasonably close to my data", the models look well-documented.

👍 Kakani Katija, Carl Boettiger, Justin Kay
Kakani Katija (kakani@mbari.org)
2022-09-27 22:19:34

*Thread Reply:* Yeah, we’ll have to think carefully on that. Totally agree.

Ben Weinstein (benweinstein2010@gmail.com)
2022-09-27 23:03:32

*Thread Reply:* how close are we to higher-order taxonomy detection for these types of images? Fish/Jellyfish/Octopus/Crab. How deep into the taxonomy can we go?

Ben Weinstein (benweinstein2010@gmail.com)
2022-09-27 23:04:59

*Thread Reply:* I am imagining that if there was some live underwater feed and we wanted to partition the images to the relevant annotators/experts. When you see a seastar send it to seastardude@gmail.com, etc.

Devis Tuia (devis.tuia@epfl.ch)
2022-09-28 03:03:31

*Thread Reply:* This is really cool! We are quite busy with underwater imagery here at ECEO, I will definitely share that with my students!

👍 Kakani Katija
Kakani Katija (kakani@mbari.org)
2022-09-28 06:10:20

*Thread Reply:* @Ben Weinstein, check out the model zoo and the MBARI benthic object detector. We're already pretty close for high-level morphological classes (of course it could be better); doing the same thing for midwater animals now too. The FathomNet taxonomy goes to species. Yep totally! The community mods (e.g. notifications, etc.) are currently underway for FathomNet. Should be released by Spring. 🤞

Levi Cai (lcai@whoi.edu)
2022-09-28 13:57:25

*Thread Reply:* I've been building some higher order marine organism detectors that we're using on AUVs in the field that are "working" to some degree, partially using some of the fathomnet dataset. Happy to share/chat.

❤️ Justin Kay
Kakani Katija (kakani@mbari.org)
2022-09-29 10:39:38

*Thread Reply:* Great! @Levi Cai, you should contribute your models to the FathomNet Model Zoo!!

❤️ Justin Kay
Sara Beery (sbeery@caltech.edu)
2022-09-28 10:05:09

For all interested, the final session of the CameraTrap Ecology Meets AI Workshop is starting now! To continue the conversation, join the <#C044MCWL1HP|camtrapai-workshops> channel 🙂

https://camtrapai.github.io/

camtrapai.github.io
👍 Aleksis Pirinen, Yseult Hb, Ethan Shafron, Otto Brookes, Emily Dorne, Emerson de Lemmus, Omiros Pantazis, Eddie Zhang, Jaanak
Kangyu Zheng (zkysfls@gmail.com)
2022-09-28 21:23:19

*Thread Reply:* Hi, I want to know are there any recordings for these talks?

Sara Beery (sbeery@caltech.edu)
2022-09-29 08:40:04

*Thread Reply:* @Tilo Burghardt?

Tilo Burghardt (tb2935@bristol.ac.uk)
2022-09-29 14:16:45

*Thread Reply:* Dear @Kangyu Zheng, all workshop slides from all speakers will be up at the weekend on the website. We did not record this 1st workshop to avoid any GDPR issues, keep it very low key and allow for just a single public link for people to join. The next edition of the workshop will provide recordings and hopefully full proceedings. Best, Tilo

👍 Gedeon
Kangyu Zheng (zkysfls@gmail.com)
2022-09-29 14:47:49

*Thread Reply:* Thanks for the reply!

Luke Sheneman (sheneman@uidaho.edu)
2022-09-29 10:56:36

For anybody interested, I've developed a working edge-AI prototype that can detect and classify species (25 classes) and transmit encoded results via an Iridium satellite modem to our servers at the University of Idaho. It's based on an NVIDIA Jetson Nano and as part of its workflow runs both MegaDetector v5 and a bespoke model I trained (using YOLOv5m) for doing species-level classification for our taxa of interest. Results so far are super encouraging. Will be sharing code, design, and pre-trained model(s) soon. Works off Li-Ion battery and solar, so hopefully can be deployed in remote areas indefinitely. Field tests are about to start. Many thanks to @Dan Morris and Idaho Fish and Game for pointers and training data!
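[Editor's note] The two-stage flow Luke describes (MegaDetector to find animals, then a species classifier run on each confident detection) can be sketched roughly as below. Everything here is a hypothetical stand-in — the stub functions, the 0.2 confidence threshold, and the labels are illustrative only, not his actual code, which would load MegaDetector v5 and a YOLOv5m classifier:

```python
# Sketch of a detect-then-classify edge pipeline (hypothetical stand-ins).
# A real deployment would run MegaDetector and a trained YOLOv5 classifier;
# both stages are stubbed here with plain functions returning canned results.

CONF_THRESHOLD = 0.2  # assumed detector confidence cutoff

def run_detector(image):
    # Stand-in for MegaDetector: candidate animal boxes with scores.
    return [{"box": (10, 20, 200, 180), "score": 0.91},
            {"box": (0, 0, 5, 5), "score": 0.05}]  # low-score noise

def run_classifier(image, box):
    # Stand-in for the species classifier applied to each detected crop.
    return {"species": "elk", "score": 0.87}

def classify_image(image):
    """Run detection, keep confident boxes, classify each crop."""
    results = []
    for det in run_detector(image):
        if det["score"] < CONF_THRESHOLD:
            continue  # skip low-confidence detections
        label = run_classifier(image, det["box"])
        results.append({"box": det["box"], **label})
    return results

if __name__ == "__main__":
    print(classify_image("trailcam_0001.jpg"))
```

The cascade design matters on a Jetson-class device: the cheap generic detector filters empty frames so the heavier species model only runs on crops that likely contain an animal.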

😍 Suzanne Stathatos, Subhransu Maji, Josh Seltzer, Sara Beery, Luke Sheneman, Carly Batist, Jason Parham, Valentin Lucet, Alan Papalia, Jason Holmberg (Wild Me), Eric Colson, Timm Haucke, Talia Speaker, Jaanak, Elizabeth Bondi, Mitch Fennell, Eddie Zhang, Jeff Reed, Emilio Luz-Ricca, Ando Shah, Nick Giampietro, Carl Boettiger, Oscar Schafer
💯 Eelke, Jason Parham, Swayam Thakkar, Aleksis Pirinen, Timm Haucke
👍 Justin Kay, Atul Ingle, Jason Parham, Rowan Converse, Dan Morris, Leonardo Viotti, Aleksis Pirinen, Timm Haucke, Gedeon, Howard Windsor, Prabath Gunawardane
😎 Jon Van Oast, Timm Haucke, Carl Boettiger
🙌 Valentin Gabeff, Timm Haucke, Ștefan Istrate, Yseult Hb, Kai Waddington, Jacob Kamminga, Nick Giampietro
👀 Sean P. Rogers
Sara Beery (sbeery@caltech.edu)
2022-09-29 11:04:48

*Thread Reply:* This is so cool 😎

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-09-29 11:17:46

*Thread Reply:* Out of curiosity, what species does the model cover? Also, do you know how quick the on-board processing is? Like how many seconds/image type thing

Luke Sheneman (sheneman@uidaho.edu)
2022-09-29 11:33:52

*Thread Reply:* @Carly Batist we are training the classifier on deer, elk, moose, domestic livestock (cattle, sheep), wolves, coyotes, mountain lion, bear, rabbits, humans, vehicles, and a smattering of other things (domestic dogs, badgers, bobcat, wild turkey, etc). For inference, once the models are loaded and initialized it takes just under 1 second per image on the Jetson Nano to run through MegaDetector and a little less than that to run through the classifier model. So maybe inference at 1.5 seconds per image. We are seeing a full cycle of wake up, pull images off trail cam, detect/classify, transmit via satellite, sleep typically takes about 5 mins.

👍 Carly Batist, Jason Parham, Rowan Converse, Valentin Lucet, Timm Haucke
Michael Bunsen (notbot@gmail.com)
2022-09-29 12:32:31

*Thread Reply:* This is fantastic. Congratulations on making an end-to-end system, and with a satellite connection! I look forward to looking at the design and code when you are ready, and I would be happy to test them. What did you use for the camera?

Luke Sheneman (sheneman@uidaho.edu)
2022-09-29 13:00:48

*Thread Reply:* Thanks @Michael Bunsen. It currently works with a Moultrie M-50i camera. As built, it could work easily with other cameras that support automated "USB Mode" where it presents as a USB drive when it detects power on its underside USB port.

Dan Morris (agentmorris@gmail.com)
2022-09-29 13:17:22

*Thread Reply:* Very cool! There are a number of related efforts (e.g. the Conservation X Labs work announced in another thread), but AFAIK all the other work like this is targeting a very general-purpose device. That will definitely be necessary to shift the field in this direction, but I think we'll also need some successes where projects like this make something work end-to-end for the team that's building it, starting in a small (ideally n=1) number of ecosystems, where you can really kick the tires on your own end-to-end process and inform all of the work on general-purpose devices. Excited to hear how field tests go!

👍 Luke Sheneman, Jason Holmberg (Wild Me), Valentin Lucet, Carly Batist, Sara Beery
➕ Sara Beery
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-09-29 14:00:58

*Thread Reply:* Awesome work!

Timm Haucke (timm@haucke.xyz)
2022-09-29 16:40:47

*Thread Reply:* @Luke Sheneman looks super interesting! We are also using the Jetson Nano for a recent camera trap prototype and have some experience with optimizing power consumption & MegaDetector models, so let me know if I can be of any help!

❤️ Sara Beery, Luke Sheneman, Carly Batist
Luke Sheneman (sheneman@uidaho.edu)
2022-09-29 18:19:14

*Thread Reply:* Thanks @Timm Haucke! I took a glance at your paper and it looks really interesting. We'll assess real-world performance and battery life in the field soon, and I am sure I will have some questions!

👍 Timm Haucke
Frederic (frederic@apic.ai)
2022-10-03 09:42:46

*Thread Reply:* I have about 20 Jetson Nano (4GB, A02) dev kits left and could sell them, since we switched to the NX. If anyone is interested, DM me. Currently they are just collecting dust.

Since availability of the dev kit is super bad, this might help some of you. (Hopefully I am not spamming this thread.)

👍 Carly Batist, Timm Haucke, Sara Beery, Michael Bunsen
🙌 Michael Bunsen, Luke Sheneman
🎉 Michael Bunsen
Jeff Reed (jeff@reedfly.com)
2022-09-29 23:36:43

What a great group and great presentations! It's humbling to be around such smart people...and inspiring to be around people who care about wild places so much. I study animal communication (PhD in computational linguistics) in the Greater Yellowstone Ecosystem and am active in local conservation efforts (mostly policy driven...from wolf conservation to mining restrictions).

  1. Are there any studies on the effectiveness of conservation strategies using game trail cameras: e.g. decrease poaching (there are similar studies of CCTV usage for crime)?
  2. I'm also interested in any efforts to formulate an ethic for the use of camera traps in ecology research or conservation. For example, the strict standards of Yellowstone Park require us to prove a reduced ecological footprint, so we have focused on building very low-power AI cameras based on PIR signals that don't require much compute on the edge (we use https://greenwaves-technologies.com/low-power-processor/) and do a REALLY good job of filtering out the majority of false positives (e.g. grass, snow, reflectance)...positives (i.e. non-plant life) are sent to the cloud, where they can be efficiently analyzed against very big and diverse models...i.e. we don't see many real-time camera trap use cases that require classification on the edge IF false positives of non-animals are removed from the data flow (see sample architecture image). In other words, if you're going to have a connected camera trap (which is the premise if you're doing AI on the edge) and you want to save battery life, then filter out "non-animals" and send the rest to the cloud for AI analysis, human in the loop, and alerting. Since PIR is the lowest-power sensor we have to date, it seems like a great place to focus AI research for low-footprint camera traps; clearly, Google and Ring haven't made it a focus yet with their outdoor products (because they mostly assume power is plentiful).
  3. I'm very interested to see the actual specs of the Conservation X camera that was announced at Edge Impulse yesterday...it looks promising, but it isn't clear if an "IoT platform" comes with it, which is somewhat necessary if you're going to deploy these at scale in conservation research (e.g. firmware updates, model updates). I'm working on a cougar population estimation project using camera traps in Yellowstone Park, and was wondering if anyone had a contact at Conservation X who would be willing to talk over the phone to see if the timing of our project and the release of their cameras would make it feasible to use. Thanks again for putting on such a great event 🙂 And if any of you are working on PIR AI models, I would love to collaborate.
GreenWaves Technologies
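The edge-filtering flow in point 2 amounts to a simple triage policy. In this sketch `tiny_model_score` stands in for the low-power on-device model, and the frame format and threshold are arbitrary assumptions:

```python
def tiny_model_score(frame):
    """Placeholder for the on-device model's 'not a false positive' score."""
    return frame.get("score", 0.0)

def triage(frames, threshold=0.5):
    """Uplink only frames unlikely to be grass/snow/reflectance triggers;
    everything else is dropped on the edge to save power and bandwidth."""
    return [f for f in frames if tiny_model_score(f) >= threshold]
```

The cloud side then only ever sees the surviving frames, which is what makes running big, diverse models there affordable.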
😎 Jon Van Oast, Sara Beery, Timm Haucke, Olivier Gimenez
Giana Cirolia (giana@berkeley.edu)
2022-10-01 16:22:29

Hi!

I am very new to this group and my background is not AI or environmental science (I am a PhD student in human microbiome research)

But…I am helping a native community member of the Dakota people plan environmentally based youth development activities on a plot of land she has recently purchased for the sole purpose of ecological rehabilitation and community centered environmental stewardship.

One thing they would love to do is center youth led research projects (with long term benefit) which focus on environmental preservation, and or, tracking the outcomes and benefits of traditional/holistic restoration initiatives in grasslands and natural free range pastures.

I was wondering if anyone had ideas of ways in which the tools, processes and initiatives you work on could integrate into a youth/native people centered land restoration and community empowerment project on a long term protected site.

Would love anyones thoughts about how the youth could be agents of change in this process and also how they could get more connected to stem learning and resources from perspectives they care about.

Looking forward to hearing from you all!

👍 Jaanak, Sara Beery, Dan Morris, Abhay, Jon Van Oast, Aleksis Pirinen, Anjali Ravunniarath, Carly Batist, Elizabeth Bondi, Toryn Schafer, Eddie Zhang, Emerson de Lemmus, Ashley Chang (she/her), Gedeon, Adam Noach, Rita Pucci, nyakundi lamech
👍:skin_tone_3: Pen-Yuan Hsing
Dan Morris (agentmorris@gmail.com)
2022-10-01 17:12:59

*Thread Reply:* I don't exactly have an answer for you, but I know some colleagues at the Point No Point Treaty Council:

https://pnptc.org/

...who are using technology (i.e., the kind of technology we talk about on this Slack) for wildlife surveys in the area governed by the Treaty of Point No Point:

https://goia.wa.gov/resources/treaties/treaty-point-no-point-1855

...which protects fishing and hunting rights for several tribes in Western Washington. If you email me offline (agentmorris@gmail.com), I'm happy to make an introduction if that would be helpful.

❤️ Sara Beery, Jon Van Oast, Tiziana Gelmi Candusso, nyakundi lamech
Giana Cirolia (giana@berkeley.edu)
2022-10-05 18:45:30

*Thread Reply:* Thank you so much!

Giana Cirolia (giana@berkeley.edu)
2022-10-05 18:50:58

*Thread Reply:* I know the young folks are already involved in learning the names of and identifying their wild and natural species.

Would curated data collection on the ground or cameras set up to survey native species be of use to anyone in your network?

Perhaps passive collection as the young people survey their land is helpful? If so, I would be very happy to see how their “species learning” could serve this conservation community group as well.

Fridah Nyakundi (nyakundi@stanford.edu)
2022-10-14 17:19:25

*Thread Reply:* @nyakundi lamech was involved in a youth project that does exactly this. But I don't know how valuable comments from a project outside the US would be to you.

Giana Cirolia (giana@berkeley.edu)
2022-10-14 17:50:27

*Thread Reply:* Always valuable thank you! Please message me I would love to learn from this expertise and experience

Irené Tema (irenetema2014@gmail.com)
2022-10-09 16:16:33

Hi everyone!

My name is Irené Tematelewo (I go by Tema). I am a PhD student in computer science at Oregon State University, working with Prof. Thomas Dietterich. My research is on anticipatory distribution shift correction, and I'm currently exploring the Snapshot Serengeti camera trap dataset in an effort to forecast changes in species populations over time based on captured images. Any pointers to other interesting image-based classification datasets with temporal information are welcome.

I'm here thanks to @Bistra Dilkina and I'm delighted to join the community.

👋 Suzanne Stathatos, Declan, Eddie Zhang, Ashley Chang (she/her), Lucia Gordon, Aleksis Pirinen, Rita Pucci, Valentin Gabeff, Sara Beery
👍 Bistra Dilkina
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2022-10-09 16:20:47

*Thread Reply:* LILA BC is a good place to look for camera trap datasets.

Are you looking for temporal shifts at a multi-year-long level, though, if you’re looking for population changes?

Irené Tema (irenetema2014@gmail.com)
2022-10-09 16:32:08

*Thread Reply:* Thanks @Suzanne Stathatos! I'm mostly looking for classification datasets with trend and/or seasonal changes in the class probabilities over time. For example, when studying the Snapshot Serengeti dataset, I noticed that the estimated population of wildebeest is close to zero during dry months because they migrate out of the Serengeti to locations where there is grass, and when it starts raining in the Serengeti, they migrate back because the grass is more rewarding. So my study of animal populations over time aims to find those species with predictable changes.

Sara Beery (sbeery@caltech.edu)
2022-10-11 04:28:07

*Thread Reply:* Very cool!! I think Snapshot Serengeti is the longest-running accessible, public dataset with images from camera traps, but iNaturalist could be another interesting modality to investigate? There you also have the added complexity of opportunistic sampling, but in aggregate changes are captured over time

👍 Irené Tema
Kakani Katija (kakani@mbari.org)
2022-10-11 09:26:20

*Thread Reply:* I would also check out ecotaxa. They focus largely on plankton classification with multi-year observations using a suite of underwater and in-lab bright field imaging tools.

👍 Irené Tema
Irené Tema (irenetema2014@gmail.com)
2022-10-11 14:51:19

*Thread Reply:* Thanks @Sara Beery and @Kakani Katija. I'll check them out

Rita Pucci (rita.pucci85@gmail.com)
2022-10-10 06:40:08

Hi to all of you!

👍 Aleksis Pirinen
Rita Pucci (rita.pucci85@gmail.com)
2022-10-10 06:41:11

I just want to say that from the 1st of November I will be part of Naturalis, and I am super excited about that! I will be working on AI & Biodiversity with a great team of experts in both AI and Biodiversity!

👏 Silvia Zuffi, Diego Marcos, gvanhorn, Stephanie O'Donnell, Adam Noach, Lucia Gordon, Rita Pucci, Ashley Chang (she/her), Sara Beery
🙌 vishva shah
Thijs (thijs@q42.nl)
2022-10-10 13:16:42

*Thread Reply:* Naturalis Biodiversity Center in Leiden?

Thijs (thijs@q42.nl)
2022-10-10 13:16:52

*Thread Reply:* @Rita Pucci?

Rita Pucci (rita.pucci85@gmail.com)
2022-10-10 14:08:54

*Thread Reply:* Yes!! Exactly

Thijs (thijs@q42.nl)
2022-10-10 14:41:47

*Thread Reply:* Nice, I live in Leiderdorp, do you live in Leiden area? Might be nice to catch up?

Rita Pucci (rita.pucci85@gmail.com)
2022-10-10 14:49:33

*Thread Reply:* Yes!!

Rita Pucci (rita.pucci85@gmail.com)
2022-10-10 14:49:40

*Thread Reply:* Super cool!

Rita Pucci (rita.pucci85@gmail.com)
2022-10-10 06:41:49

🎉💃🎉💃🎉💃🎉

Pietro Perona (perona@caltech.edu)
2022-10-11 01:14:18

How did they count the frogs? https://www.bbc.co.uk/news/science-environment-63206140

BBC News
👍 Jorrit van Gils, Carl Boettiger
Carl Boettiger (cboettig@berkeley.edu)
2022-10-25 13:08:02

*Thread Reply:* Yeah, that's a great question. As they describe it, it sounds like these authors base their estimates on what we call dynamic occupancy models (https://www.pnas.org/doi/10.1073/pnas.2123070119). Such models essentially seek to model the probability of detection for some latent occupancy (see the methods section and appendix: https://www.pnas.org/doi/suppl/10.1073/pnas.2123070119/suppl_file/pnas.2123070119.sapp.pdf). Such models are typically Bayesian hierarchical models estimated by MCMC; they provide their code at the end of the supplement.

I think this area is ripe for advances from the ML community but doesn't get as much attention as it could! Numerical constraints often mean that these models must ignore lots of factors that probably also influence probability of detection, as well as generally ignoring ecological factors like whether animals like to group together or establish separate territories (i.e. so that observations are not properly independent), etc. Because there is no perfect census method to compare to, it's rather difficult to say how well current methods of counting actually work.
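As a concrete illustration of the latent-occupancy-versus-detection idea above, the basic single-season occupancy likelihood (a simplification of the dynamic models in the paper) can be written out directly:

```python
def occupancy_likelihood(history, psi, p):
    """Likelihood of one site's detection history (a list of 0/1 visits)
    under a basic single-season occupancy model: the site is occupied with
    probability psi, and an occupied site is detected with probability p
    on each independent visit."""
    n, d = len(history), sum(history)
    occupied = psi * p**d * (1 - p)**(n - d)
    if d == 0:
        # an all-zero history can also arise from an unoccupied site
        return occupied + (1 - psi)
    return occupied
```

MCMC-based hierarchical versions put priors on psi and p and let both vary with covariates and over time; this is just the likelihood kernel they build on.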

Titus (titus@colossal.com)
2022-10-11 07:58:08

Hey Everyone, I’m new to this community. My name is Titus and I’m the VP of Strategy and Head of Computational Sciences at Colossal Biosciences. My background is in AI+Genomics but my team is working across genomic, audio, and imaging methods for conservation and de-extinction. This community is great and looking forward to chatting with people as the opportunity comes up! If anyone ever needs anything or wants to chat, shoot me a note.

colossal.com
😊 Aleksis Pirinen, Alexander Robillard, Carly Batist, Otto Brookes, Sara Beery
💡 Ștefan Istrate, Alexander Robillard
👏 Stephanie O'Donnell, Alexander Robillard, Jon Van Oast, Irené Tema, Adam Noach, Declan
Giana Cirolia (giana@berkeley.edu)
2022-10-12 20:04:15

*Thread Reply:* Hi!

Maybe it is far afield, but one area where we are really lacking in preservation efforts is the biodiversity created by diverse diets, food practices and local ecologies.

I’d be thrilled to discuss further if your team has been thinking at all about how to preserve and maintain vanishing human/health-associated microbes and plants :)

I am a human microbiome PhD student at UCB, but also part of their NSF Digital Transformation of Development cohort, which helps PhD students center their questions in translational efforts that serve humanity, particularly vulnerable groups (people, species etc).

I am looking for better ways to expand the questions we ask around the human biome to push preservation of diverse cultural food practice and diverse edible species.

If any of that helps I would be very happy to talk more (at least on the bio side of things).

I am still teaching myself the computational side!

Feel free to email Giana@berkeley.edu

Kieran (kag25@sussex.ac.uk)
2022-10-12 10:55:15

Hi all, I'm new to this space. I'm doing a PhD at Sussex University looking at building interpretable machine learned representations of soundscape audio recordings collected from ecosystems - i.e. ecoacoustics. I'm a data scientist and FOSS (free & open source) software engineer by trade, my ecological knowledge is not learned within an academic environment but from trudging around various landscapes on foot! Looking forward to chatting about the use of sound (and more) as a tool for tracking biodiversity, changes in habitat quality and more.

👍 Oisin Mac Aodha, Dan Morris, Adam Noach, Declan, Ivor Simpson, Justin Kay, Ali Johnston, Barbie D, Julia Marisa Sekula, Sara Beery, Aleksis Pirinen, Jinsu Elhance
👍:skin_tone_3: Pen-Yuan Hsing
Julia Marisa Sekula (jmsekula@stanford.edu)
2022-10-12 14:41:31

Hi All! I’m a Stanford MBA |Msc and currently doing an independent study project looking to understand the dynamics of the 🧬 genomic data market and how industry is thinking about potential new sources of this data - especially for conservation/biodiversity purposes 🌿. For this, we are speaking to Academics, Pharma, SynBio, to Insurers. If anyone has any leads or would like to chat - we would love an introduction!

🙌:skin_tone_3: Pen-Yuan Hsing
🙌 Jon Van Oast, Suzanne Stathatos, Dhruv Sheth, Sara Beery, Fridah Nyakundi
Giana Cirolia (giana@berkeley.edu)
2022-10-12 19:58:58

*Thread Reply:* Hi! I am not involved in the purely plant side of this, but I know many efforts are under way to think about conservation of human internal ecological biodiversity.

It’s a really difficult topic because it’s not easy to fund collections in non US populations and also there are limited frameworks for ethical and human/culturally centered approaches to storing and using that data.

Happy to talk more if any of my information relative to the human biome can help!

Giana Cirolia (giana@berkeley.edu)
2022-10-12 19:57:13

Hi All,

Thank you again for your initial connections relative to native land rehabilitation.

I was wondering if I could ask a further question.

Many tribes seek to restore their lands from previous monoculture farmland or pesticide exposure into natural habitats which support small sustainable farming efforts, biodiversity restoration and rejuvenation of native species.

Are there ways that current satellite and land survey data are being leveraged to give remote help to develop recommendations for targeting/organizing soil/species recovery processes?

For example, from such data, has anyone developed processes for identifying the needs of the soil (without physical testing) or the locations most primed to support self-sustained regrowth of native plants and recovery of native species?

Is this very very domain specific or can generalized insights for soil management suggestions be gleaned from such data without feet on the ground testing? Or in combination with ground testing?

😯 Aleksis Pirinen, Pen-Yuan Hsing
❤️ Adam Noach, Aleksis Pirinen, Alexander Robillard, Sara Beery, Yseult Hb
Alexander Robillard (RobillardA@SI.EDU)
2022-10-13 10:04:35

*Thread Reply:* Hi Giana, this sounds very similar to some of the work being done by Propagate. Many of their tools do this with a focus on agroforestry as a method for sustainable economic development, conservation and carbon sequestration, so you might find their analytics tools to be exactly what you're looking for. Their tools utilize multiple input layers to assess soil quality, hydrology etc. to outline which species might be best to plant. Keep in mind they're not doing this directly with AI but with pre-generated spatial layers. Their resolution is pretty decent as well. Happy to put you in touch with one of my friends who works there if it's of interest. https://propagateag.com/

Best of luck!

propagateag.com
Giana Cirolia (giana@berkeley.edu)
2022-10-13 11:26:49

*Thread Reply:* My goodness that would be such a gift! Thank you so much I would be so grateful.

My email is giana@berkeley.edu :)

👍 Alexander Robillard
Howard Windsor (Wildbook@hwindsor.me.uk)
2022-10-13 09:23:47

Hi all, I'm a UK-based experienced software engineer (20+ years) who now has some time and a desire to use my skills and experience to aid conservation and help combat climate change rather than grow shareholder profit. I've worked extensively on embedded systems, everything from writing directly to HW registers in device drivers on bare-metal systems through RTOS and embedded Linux. I have also worked on Flask-based REST API web servers (github.com/hwindsor). I have seen and written well-designed software systems that are simple to extend. I have also experienced the troubles with badly designed systems, so I can help a lot on problems to avoid. If you have any software projects that you would like some help on, design as well as development, please drop me a DM and let's see what we can do together.

👋 Josh Seltzer, Adam Noach, Dan Morris, Jon Van Oast, Eelke, Ted Schmitt, Barbie D
👍 Aleksis Pirinen, Sam Kelly, Sara Beery, Luke Sheneman
❤️ Giana Cirolia, Mark Fisher, Pen-Yuan Hsing
👍:skin_tone_3: Pen-Yuan Hsing
😎 Jon Van Oast
Yuanqi Du (yd392@cornell.edu)
2022-10-14 17:11:04

Hey guys, my name is Yuanqi Du (https://yuanqidu.github.io/), a first-year CS PhD student at Cornell working with Prof. Carla Gomes. I have been working on molecular simulation and drug discovery for about three years, and now I am new to materials science and computational sustainability! I hope to learn a lot from you! Also, we are leading an AI4Science101 initiative (https://ai4science101.readthedocs.io/en/devel/index.html) where we aim to provide a series of motivational and overview blogs introducing both AI tools and science problems, to motivate people to join this exciting field. If anyone is interested in contributing interesting things related to conservation, please DM me!

Yuanqi Du's Personal Website
👋 gvanhorn, Justin Kay, Declan, Avi Sundaresan, Dan Morris, Carly Batist, Kangyu Zheng, Sara Beery, Jaanak, Emilio Luz-Ricca, Gedeon, Lily Xu, Chris Yeh
Kasirat (kasirat_turfi@hotmail.com)
2022-10-16 01:46:30

Hi everyone, anybody working with point clouds? Want to connect with people who are working with forest point clouds!🌲either ALS or TLS!

👀 Ethan Shafron
👍 Chinmay Talegaonkar, Aleksis Pirinen, Gedeon
Eddie Zhang (ete@ucsb.edu)
2022-10-17 10:53:27

Just stumbled on this competition, super super cool! www.ai4climatecoop.org

🎯 Josh Seltzer, Sara Beery
Eddie Zhang (ete@ucsb.edu)
2022-10-17 10:55:07

*Thread Reply:* I'm thinking about getting involved right now, dm me if it looks interesting to you

Josh Seltzer (jyseltz@gmail.com)
2022-10-17 11:28:19

*Thread Reply:* I just signed up the other day! I haven't gotten around to watching the introduction videos yet, as there is a lot to take in and some of it is honestly beyond my comprehension at this point. I'm definitely interested though and would love to team up with someone : )

Anselm Bradford (ans@anselmbradford.com)
2022-10-18 13:02:34

https://openopps.usajobs.gov/tasks/4347?fromSearch 4 month plus detail with NOAA “AI-Ready Data Lead for the NOAA Center for Artificial Intelligence - Open to All”

👍 Justin Kay, Leonardo Viotti
Alessandra Sellini (sellini.alessandra@gmail.com)
2022-10-19 06:38:24

Hello everyone! Since I’m new in this community, I’d thought to introduce myself 😄

I completed an MSc in Marine Biology and Ecology at James Cook University, Australia, then moved to the Philippines to work as a research assistant and research fellow at a local NGO 🌿 There I first approached the world of tech and conservation while working on the creation of 3D reef models based on DO-SVS footage. At the beginning of the pandemic, I moved back to Italy, my home country, and worked for two years at the WWF Italy Conservation Office as an Associate 🐼 Besides stakeholder engagement and drafting different documents, I had the opportunity to become a drone pilot. Working on a project for cetacean conservation, we planned to deploy a drone to gather footage and genetic material from whale blows 🐋

I grew fascinated by the possibilities offered by the intersection between tech and conservation and decided to pursue this path. I entered the world of AI/ML after attending a 9-week Data Science bootcamp in Brussels. As a final project, my team and I developed a model to target new potential Marine Protected Areas using current environmental, biodiversity and fishing data. Having only two weeks, we chose to focus on European waters and the Arabian Sea 🌊

I’m now looking for new adventures 🚀 If anyone is interested in talking about AI, ML, or marine biology, I would love to hear from you!

👍 Valentin Lucet, Rebecca, Alexander Robillard, Dan Morris
👋 Viktor Domazetoski, Rebecca, Adam Noach, Alexander Robillard, Déva Sou, Lucia Gordon, Anjali Ravunniarath, Taiki Sakai - NOAA Affiliate, Suzanne Stathatos, Jon Van Oast, Howard Windsor, Yseult Hb, Eddie Zhang, Malika Nisal Ratnayake, Kieran, Jaanak, Moira Shooter, Omiros Pantazis
Jorrit van Gils (vangilsjorrit@gmail.com)
2022-10-21 08:57:00

Hi all,

Programming skills are very useful for an AI for conservation researcher. Therefore, I am looking for a part-time programming job where I can improve my Python skills, preferably related to computer vision/AI. Last year I worked for an AI website, where I learned to work with datasets, object-relational mapping, SQL, Git and Docker. Despite the steep learning curve, I'm not an experienced programmer yet. So, I'm looking for an opportunity where I can contribute while improving these skills. It would be very much appreciated if someone could point me in the right direction.

For more info please check my webpage.

Thanks a lot!

Jorrit

👍 Jon Van Oast, Kai Waddington, Helena Russello, Peter van Lunteren
🙌 Abhay
Sicily Fiennes (sicilyfiennes@gmail.com)
2022-10-23 07:11:08

Hi everyone,

Just a couple of queries from me! I’m working on an application for the CV4Ecology summer school and trying to make a good labeling plan, the lack of which was one of the reasons I was unsuccessful the first time applying. Part of my PhD project is using machine learning to improve visual identification for birds in the wildlife trade (focussing on the case study of . My existing dataset consists of labeled images of over 100 species, with between 70-300 images per species. I’m also interested in attempting multi-species identification in images, and object detection for images of marketplaces where there are many individuals. If any successful applicants from last year want to get in touch, please do!

Secondly, I’m looking for a little extra funding (£1,000-2,000) for some fieldwork. If anyone knows of any small pots of money closing soon that don’t have a long decision time, feel free to comment below or PM me, I’d be so grateful!

Rowan Converse (rowanconverse@unm.edu)
2022-10-23 22:28:06

*Thread Reply:* Hi Sicily, I was in the 2022 cohort and would be happy to chat! Send me a DM 🙂

Taiki Sakai - NOAA Affiliate (taiki.sakai@noaa.gov)
2022-10-25 13:04:54

*Thread Reply:* Hi Sicily! I was also in the 2022 cohort and happy to chat if you want (also hi Rowan!)

👋 Rowan Converse
Sicily Fiennes (sicilyfiennes@gmail.com)
2022-10-25 17:17:29

*Thread Reply:* Hi both! Thanks for the message this sounds great. Will PM you!

Phuc Le (phuc.le@ug.fuv.edu.vn)
2022-10-23 12:14:09

Hi everyone, my name is Phuc, I’m a Vietnam-based CS undergrad student. I am currently working on my capstone thesis, which develops an AI-integrated mobile app that is able to recognize 26+ species of native and non-native freshwater turtles and tortoises in Vietnam to combat illegal turtle trades both online and offline. Primary users will be law enforcers, then nature enthusiasts, educators, etc. The model should be able to classify turtle species in images from the field or social media. I am in the data collection phase and am doing that by taking photos of turtle species being protected at a rescue center. They have >2000 individuals of 22 species, which is enough for me to take pictures of. However, I have some questions that I don't know how to start with:

  1. What device should I use to take pictures? A mobile phone, a Sony camera, or a mixture of both? Does resolution matter for self-collected data?
  2. What characteristics of the data should I take into account (for example, quantity, diversity in background, lighting, camera angles, turtle individuals) to ensure generalizability? Could anyone with experience in projects that use self-collected data - not dependent on Internet sources - to train ML/AI models share some thoughts on these questions? Are there any guidelines out there that I can apply to my project? And if you know any papers or projects that also intend to create animal identification apps for a similar purpose, please share. This is my very first big AI project, so please forgive me if I ask silly questions :) Thanks everyone in advance!
Valentin Ștefan (valentin.stefan.vst@gmail.com)
2022-10-24 10:33:57

*Thread Reply:* You can have a look at the work done by iNaturalist - the Seek app - or the PlantNet app (https://play.google.com/store/apps/details?id=org.plantnet&hl=en&gl=US). The more sensors you use to capture the images, the better for generalizability, but from my experience with CNNs for insect localization & classification, if you develop a model that will be used on a phone, then capture your training images with the phone, exactly how you would expect a user to do it. My main worry is that if you aim for too much generalizability, then it might not work at some point. See also the YOLOv5 guidelines at https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results

play.google.com
Alexander Robillard (RobillardA@SI.EDU)
2022-10-24 14:45:21

*Thread Reply:* Hi Phuc, feel free to send me a DM/email; happy to make some time to talk if you're interested! Key features for turtles are mostly carapace and plastron shots, with side photos of the head being the second most important feature. I also strongly recommend trying to develop a mask/segmentation model to extract the shell.

Phuc Le (phuc.le@ug.fuv.edu.vn)
2022-10-26 05:51:39

*Thread Reply:* Thanks @Valentin Ștefan for the insights! I have used Pl@ntNet before and was impressed by its ability to recognize plants in Vietnam. I totally forgot about it until you mentioned it. I will take a look at their technology. I also used YOLOv5 to localize the position of the turtle in an image, and it generalized quite well. But I think I will use different networks for the classification task, because I want the model to return many predictions so that users can verify them. Still, the guide from the YOLOv5 team is very informative and helpful!

Phuc Le (phuc.le@ug.fuv.edu.vn)
2022-10-26 05:52:07

*Thread Reply:* Hi @Alexander Robillard, thanks for your note, I have DM-ed you for further discussion 🙂

Valentin Ștefan (valentin.stefan.vst@gmail.com)
2022-10-26 05:57:08

*Thread Reply:* Note that with the new release of YOLOv5, v6.2, you can also do image classification, not only localization. I think you can tell the model to return several predictions (possibly with the --conf-thres and --iou-thres flags in the detect.py script). If you explore further with YOLOv5, let me know, as I am also interested in how it performs and in how the weights can be deployed as an app on a phone. At the moment I am unsure how fast it will run on a phone, and I do not have the coding skills yet to build an app. I presume the nano weights version would be a better option.
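For intuition about what those two flags control, here is a dependency-free sketch of the confidence cut and greedy NMS that --conf-thres and --iou-thres parameterize (a simplified illustration, not YOLOv5's actual implementation):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if not inter:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_detections(dets, conf_thres=0.25, iou_thres=0.45):
    """dets: list of (box, confidence). Drop low-confidence boxes, then
    keep each remaining box only if it doesn't overlap an already-kept,
    higher-confidence box by more than iou_thres (greedy NMS)."""
    dets = sorted((d for d in dets if d[1] >= conf_thres),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in dets:
        if all(iou(box, k[0]) < iou_thres for k in kept):
            kept.append((box, conf))
    return kept
```

Raising conf_thres trades recall for fewer predictions per image; raising iou_thres allows more overlapping boxes through, which matters when several animals crowd one frame.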

Phuc Le (phuc.le@ug.fuv.edu.vn)
2022-10-26 06:29:04

*Thread Reply:* I didn't know they have classification versions. Thanks a lot. I will test these models out.

Anh Quoc Nguyen (nguye2aq@mail.uc.edu)
2022-10-26 20:33:19

*Thread Reply:* Imo, you can just take the photos at the best quality, then lower their quality during training as augmentation. Also, I don't see why you should stick to YOLOv5; you could use YOLOv7 or any other newer SOTA model. And yes, you should take into account all those data characteristics. Hey, DM me, I'd love to discuss and help out.
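A toy version of that degrade-during-training idea, on a nested-list "image" so it stays dependency-free; a real pipeline would use torchvision or albumentations transforms (e.g. random downscaling or JPEG compression) instead:

```python
def degrade(image, factor=2):
    """Simulate a lower-quality capture: downsample a nested-list image
    by an integer factor, then blow it back up to roughly the original
    size with nearest-neighbour repetition."""
    small = [row[::factor] for row in image[::factor]]
    upsampled = [[px for px in row for _ in range(factor)] for row in small]
    return [list(row) for row in upsampled for _ in range(factor)]
```

Applied randomly to high-quality training photos, this kind of transform teaches the model to tolerate the blurrier inputs real users will submit.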

Phuc Le (phuc.le@ug.fuv.edu.vn)
2022-10-26 21:21:05

*Thread Reply:* @Anh Quoc Nguyen I've DM-ed you!

Alayna Van Dervort (av@thebigwild.com)
2022-10-26 12:48:45

Hello All,

Alayna Van Dervort (av@thebigwild.com)
2022-10-26 12:49:13

Can I gain some advice on the best way to get an animal to look directly at a camera trap?

Michael Procko (xprockox@gmail.com)
2022-10-26 12:50:11

*Thread Reply:* Is this for a specific animal or all species that might be present in the study area?

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-10-26 12:50:27

*Thread Reply:* Do you mean by some sort of light/sound stimulus?

Alayna Van Dervort (av@thebigwild.com)
2022-10-26 12:51:01

*Thread Reply:* This one in particular is for Large Cats

Alayna Van Dervort (av@thebigwild.com)
2022-10-26 12:51:38

*Thread Reply:* I have mega tons of data that is from the side but very limited data on direct facial images

Alayna Van Dervort (av@thebigwild.com)
2022-10-26 12:52:02

*Thread Reply:* (especially because much of the data is at night)

Alayna Van Dervort (av@thebigwild.com)
2022-10-26 12:53:14

*Thread Reply:* I see many of the Snow Leopard cats take a minute to look directly at the camera... I would love to know how/why.

Michael Procko (xprockox@gmail.com)
2022-10-26 12:54:05

*Thread Reply:* Not sure if you can use images for this (may need video because it happens so fast), but mountain lions often look directly at cameras before fleeing when the cameras are paired with audio playback of human voices... https://www.washingtonpost.com/news/speaking-of-science/wp/2017/06/21/mountain-lions-are-terrified-by-the-voices-of-rush-limbaugh-and-rachel-maddow/

Washington Post
Michael Procko (xprockox@gmail.com)
2022-10-26 12:55:39

*Thread Reply:* If you're restricted to using just images, and no video, I would imagine you'd need a scent lure or something they are more likely to investigate for a longer period of time.

Alayna Van Dervort (av@thebigwild.com)
2022-10-26 12:57:38

*Thread Reply:* Interesting, thank you. Rush L. would scare the hell out of me too ;) I am looking for noninvasive methods. We are also collecting video and I was wondering about the use of scent. Do you or anyone have experience using it?

Michael Procko (xprockox@gmail.com)
2022-10-26 12:59:56

*Thread Reply:* Yes, but I would have to dig it up from archives of messages, and am running to a meeting now. I will make a note to come back and respond!

Alayna Van Dervort (av@thebigwild.com)
2022-10-26 13:13:42

*Thread Reply:* Thank you!

Dan Morris (agentmorris@gmail.com)
2022-10-26 17:09:35

*Thread Reply:* I've only owned cheap consumer camera traps, but I'm curious whether anyone knows about the bandwidth of the IR flash on research-grade vs. consumer-grade cameras. I wouldn't say it's a "giant red light" on my cheap consumer camera trap, but it's close: you can definitely see visible red during the flash, and every coyote walking by the camera looks right at it. I would guess the flash is much narrower in bandwidth on serious cameras, but I have no data to back that up. But if that's accurate, maybe a solution is: use a terrible camera?

👍 Mitch Fennell
Dan Morris (agentmorris@gmail.com)
2022-10-26 17:09:46

*Thread Reply:* [doesn't help you during the day]

Dan Morris (agentmorris@gmail.com)
2022-10-26 17:11:21

*Thread Reply:* Of course, sound will be more effective in attracting attention (I assume), but it sounds like you're trying to walk the fine line between attracting attention and deterring presence, and maybe an "unintentional red flash" hits that sweet spot.

Dante Wasmuht (dante@conservationxlabs.org)
2022-10-27 04:14:57

*Thread Reply:* sounds like they played a recording of a juvenile puma call plus light flash to get a mugshot

Paul Allin (allinpaul@gmail.com)
2022-11-01 03:44:55

*Thread Reply:* baited camera traps can work but then your observations are beginning to interfere with their natural behaviour so depends a bit on the overall objective

Scott Hosking (jshosking@gmail.com)
2022-10-27 10:49:58

New Special Interest Group: Biodiversity monitoring and forecasting. How can we best catalyse and champion the development of new AI and data-driven methods for monitoring and forecasting biodiversity change? https://www.turing.ac.uk/research/interest-groups/biodiversity-monitoring-and-forecasting

🎉 Oisin Mac Aodha, Aleksis Pirinen, Sara Beery, Dan Morris, Carly Batist, Tiziana Gelmi Candusso, Ali Johnston, Emily Charry Tissier, Cathy Atkinson, Ando Shah
Sara Beery (sbeery@caltech.edu)
2022-11-01 13:36:37

We're hosting an InfoSession for those interested in the CV4Ecology Workshop in 2023 this Thursday, Nov. 3, from 10-11 PT! More details here: https://twitter.com/cv4ecology/status/1587217345180622849

👍 Oisin Mac Aodha, Subhransu Maji, Josh Veitch-Michaelis, Dan Morris, Valentin Ștefan, Mitch Fennell, Taiki Sakai - NOAA Affiliate, Kalyan Nadimpalli, Catherine, Lukas Picek, Stephanie O'Donnell, Riccardo de Lutio, Déva Sou, Ben Williams
👍:skin_tone_3: Pen-Yuan Hsing
😎 Jon Van Oast, Emerson de Lemmus
Ben Weinstein (benweinstein2010@gmail.com)
2022-11-02 12:50:22

It's grant-writing season. Quick poll! What does the AI for ecology community need in the next five years?
> 🐵 New data like benchmarks, hardware and image databases
> 🎈 New open source models that perform reasonable predictions on common ecological tasks (e.g. megadetector -> camera traps, deepforest -> trees, merlin -> bird sound id)
> 🐘 Better tools that allow ecologists to create machine learning models with GUI interfaces/relatively little code (e.g. AIDE)
> 🦕 Better algorithms that allow existing data to be applied more easily across space and time (class imbalance, few shot learning, geographic/taxonomic generalization).
> You are welcome to add comments, but please vote for only one.

🦕 Ben Weinstein, Dante Wasmuht, Toryn Schafer, Matt Weldy, Yseult Hb, Alba Solsona, Lukas Picek, Omiros Pantazis, Tiziana Gelmi Candusso, Taiki Sakai - NOAA Affiliate, Déva Sou, Josh Veitch-Michaelis, Emerson de Lemmus, Daniel Grzenda, Devis Tuia, Sara Beery, Ethan Shafron, Claudia Haas, Kakani Katija, nyakundi lamech, Jason Holmberg (Wild Me), Sicily Fiennes
🐵 Georgia Atkinson, Lukas Picek, Valentin Lucet, Heather Lynch, Valentin Ștefan, Emilio Luz-Ricca, Yves Bas, Sara Beery, Ethan Shafron, Ando Shah, Kakani Katija
🐘 Carly Batist, Mitch Fennell, Felipe Parodi, Tiziana Gelmi Candusso, Blair Costelloe, Peter van Lunteren, Rowan Converse, Anton Alvarez
🎈 Mitch Fennell, Felipe Parodi, Dan Morris, Devis Tuia, Stefan Schneider, Sara Beery, Kelly Easterday, Peter van Lunteren
Valentin Lucet (valentin.lucet@gmail.com)
2022-11-02 14:49:32

*Thread Reply:* I've only been around this space for a short while, but I've seen the discussions that you led about standards and how "not to re-invent the wheel", especially when it comes to processing images. I think that's another one to add to the list: we have many tools but nothing that has been widely embraced (except maybe MegaDetector?).

Peter van Lunteren (contact@pvanlunteren.com)
2022-11-03 17:37:36

For those of you who are interested: I’ve made a GUI for Windows, Mac and Linux which uses the MegaDetector model to analyse images. The idea behind this is to make the power of MegaDetector available for non-coding ecologists too. Opening and installing the GUI is done by bash/batch files, so the user doesn’t have to be bothered with installing anaconda, git for windows, labelImg and all the other prerequisites.

Besides running MDv5, it can:
• Place empty images, people, vehicles or animals in subfolders
• Export .xml label files in Pascal VOC format for further model training
• Create an input file for further processing in Timelapse
• Manipulate data by drawing boxes or cropping detections
• Review and edit annotations using the open-source annotation software labelImg
If you know any colleagues or friends who can use this GUI, spread the word! -> https://github.com/PetervanLunteren/EcoAssist

👍 gvanhorn, Dan Morris, Timm Haucke, Mitch Fennell, Tiziana Gelmi Candusso, nyakundi lamech, Yuerou Tang, Evan Hallein, Jason Holmberg (Wild Me)
🙌 Suzanne Stathatos, Lucia Gordon, Felipe Parodi, Taiki Sakai - NOAA Affiliate, Carly Batist, Cameron Trotter, Jacob Kamminga, Timm Haucke, Talia Speaker, Yseult Hb, Ethan Shafron, Tiziana Gelmi Candusso, David Will, Luke Sheneman, Kakani Katija, Jaanak, nyakundi lamech, Jason Holmberg (Wild Me)
🎉 Jon Van Oast, Fadel, Valentin Gabeff, Gedeon, Timm Haucke, Sofía Miñano, Yseult Hb, Tiziana Gelmi Candusso, Sara Beery, David Will, nyakundi lamech, Jason Holmberg (Wild Me), Dan Morris
💯 Eelke, Sam Kelly, Dante Wasmuht, Timm Haucke, Tiziana Gelmi Candusso
👍:skin_tone_3: Pen-Yuan Hsing
😍 Sara Beery, Anton Alvarez
Jon Van Oast (jon@wildme.org)
2022-11-03 17:48:42

*Thread Reply:* this is great! thank you.

Dan Morris (agentmorris@gmail.com)
2022-11-03 18:54:47

*Thread Reply:* Wow, you've added tons of features here! Out of curiosity: it's rare for an ecologist to need to add/manipulate bounding boxes for analysis purposes, so I assume that you and/or users are using that functionality to either fine-tune MegaDetector or train a new detector... is that right? Can you share anything about what the edited boxes are being used for?

Peter van Lunteren (contact@pvanlunteren.com)
2022-11-04 01:06:07

*Thread Reply:* A user wanted to use MD to kickstart his annotation process, so that he didn’t have to manually draw all the boxes. He would then only have to change the label to the species (and fine-tune the results, if needed). With this reviewed and annotated data he would train his own site-specific model. That’s why I added the feature of converting MD’s output.json to individual .xml files and getting rid of the labelImg installation hassle. I agree with you that most users won’t use this feature, but, nevertheless, it’s possible :)
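
The conversion described here (MegaDetector's output.json to per-image Pascal VOC .xml) can be sketched roughly as below. The JSON field names ("file", "detections", "category", normalized [x, y, w, h] boxes with top-left origin) follow MegaDetector's batch output format as I understand it, but verify them against your own output.json before relying on this:

```python
import xml.etree.ElementTree as ET

def md_to_voc(image_entry, categories, width, height):
    """Convert one MegaDetector image entry into a Pascal VOC
    annotation element, scaling normalized boxes to pixel corners."""
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = image_entry["file"]
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for det in image_entry.get("detections", []):
        x, y, w, h = det["bbox"]  # normalized [x, y, w, h]
        obj = ET.SubElement(ann, "object")
        ET.SubElement(obj, "name").text = categories[det["category"]]
        box = ET.SubElement(obj, "bndbox")
        ET.SubElement(box, "xmin").text = str(round(x * width))
        ET.SubElement(box, "ymin").text = str(round(y * height))
        ET.SubElement(box, "xmax").text = str(round((x + w) * width))
        ET.SubElement(box, "ymax").text = str(round((y + h) * height))
    return ann

entry = {"file": "img.jpg",
         "detections": [{"category": "1", "conf": 0.9,
                         "bbox": [0.1, 0.2, 0.5, 0.4]}]}
tree = md_to_voc(entry, {"1": "animal"}, width=1000, height=800)
print(ET.tostring(tree, encoding="unicode"))
```

A user would then open the resulting .xml files in labelImg, change "animal" to a species label, and fine-tune the boxes.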

Pen-Yuan Hsing (penyuanhsing@posteo.is)
2022-11-04 08:47:29

*Thread Reply:* Great work, thanks for sharing! If a user already has some of the dependencies on their system, can this tool use them instead of installing everything from scratch with the batch file?

Sofía Miñano (s.minano.glez@gmail.com)
2022-11-04 10:10:24

*Thread Reply:* potentially of interest for future developments: over the summer I worked with others in the DeepLabCut team integrating MegaDetector and DeepLabCut. You can check the prototype at the following HuggingFace space

👀 Anton Alvarez
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2022-11-04 15:55:10

*Thread Reply:* Ahhh this was exactly what i needed in July. I love this! Thank you so much!

Peter van Lunteren (contact@pvanlunteren.com)
2022-11-04 17:12:08

*Thread Reply:* @Pen-Yuan Hsing The install files search for a working installation of anaconda and git for windows. If present, it will use them. The rest (gits and models), however, will be downloaded from scratch and stored in a hidden folder. This has a few reasons: I) I want the gits to be checked out at a specific time to avoid conflicts with newer commits, II) it will slow down the installation and opening of the app significantly, and III) if the user then adjusts a directory somewhere down the tree, it will error.

@Sofía Miñano very interesting! Might be worth incorporating in a future version of EcoAssist too 🙂

😀 Sofía Miñano
👀 Pen-Yuan Hsing
Luke Sheneman (sheneman@uidaho.edu)
2022-11-07 17:07:14

*Thread Reply:* This is great! I've been contemplating doing this, and it looks like you've done a great job here.

Anton Alvarez (aalvarez@wwf.es)
2022-11-09 07:52:34

*Thread Reply:* AWESOME! Thank you so much for your work! Some Iberian lynx field technicians are starting to use it, and they are fascinated!! They are very happy!! One question about the use of custom YOLOv5 models: are there any considerations to take into account (before thinking about training a new one)? Thanks so much! @Peter van Lunteren

🤩 Sofía Miñano
Peter van Lunteren (contact@pvanlunteren.com)
2022-11-09 15:18:23

*Thread Reply:* Hi @Anton Alvarez, Thanks for your message. Always good to hear that people can use it for conservation 🙂 EcoAssist can run custom yolov5 models if they are retrained from the MegaDetector model using transfer learning. For example, if you find that MegaDetector is not great at recognising a certain species as “animal”, you can retrain the model and add some labelled data of the cases you want it to improve on. Additionally, you can expand on the three default classes MegaDetector uses (“animal”, “person” and “vehicle”) and retrain the model to detect custom classes (e.g. “Iberian Lynx”). In that case, if you add classes, you’ll need to adjust run_detector.py and separate_detections_into_folders.py too. If you need any help with that, let me know!

😍 Anton Alvarez
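
The folder-separation step mentioned in this thread (routing images into per-class subfolders based on detections) can be sketched as a dry run. The data layout mirrors MegaDetector-style JSON output, but all names here are illustrative, not EcoAssist's or MegaDetector's actual code:

```python
def plan_folders(results, categories, conf_thres=0.2):
    """Map each image file to a destination subfolder named after its
    highest-confidence detection, or 'empty' if nothing passes the
    threshold. A planning step only; moving files is left out."""
    plan = {}
    for img in results["images"]:
        dets = [d for d in img.get("detections", []) if d["conf"] >= conf_thres]
        if not dets:
            plan[img["file"]] = "empty"
        else:
            best = max(dets, key=lambda d: d["conf"])
            plan[img["file"]] = categories[best["category"]]
    return plan

# Made-up results with a custom class added alongside the defaults.
results = {"images": [
    {"file": "a.jpg", "detections": [{"category": "4", "conf": 0.9}]},
    {"file": "b.jpg", "detections": [{"category": "2", "conf": 0.1}]},
]}
cats = {"1": "animal", "2": "person", "3": "vehicle", "4": "iberian_lynx"}
print(plan_folders(results, cats))
```

This also shows why adding custom classes requires touching the separation logic: the category-to-name mapping must include every new class ID the retrained model can emit.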
Anton Alvarez (aalvarez@wwf.es)
2022-11-18 12:13:43

*Thread Reply:* Thank you very much for clarifying everything; I wanted to have things clear before embarking on training the model (I need to allocate my time and budget). If I have any questions I will contact you. And thank you very much (again) for the new EcoAssist v2.1 version!

😁 Peter van Lunteren
Dan Morris (agentmorris@gmail.com)
2023-01-06 15:07:13

*Thread Reply:* I finally got a chance to try EcoAssist today, and not only am I super-impressed (it worked great on my Windows PC, even using the GPU correctly), the installer scripts you've written are absolute works of art. MegaDetector aside - in fact, conservation aside - you have a really neat approach to packaging and deploying Python code in general for users who have never heard of Python.

Batch file nerds all over the world will envy what you've done in that batch file. I bet somewhere there's a Reddit forum called "things you didn't think you could pull off in a batch file", and if there is, you should totally post there.

The fact that you even did all the magic stuff to allow accelerated inference on M1 Macs - still with a one-click deploy! - is icing on the cake.

😁 Peter van Lunteren
👍:skin_tone_3: Pen-Yuan Hsing
💯 Tiziana Gelmi Candusso
Anna Boser (annaboser@ucsb.edu)
2022-11-04 16:16:56

Does anyone know what it's called, or whether there are papers on the phenomenon where the overall R2 does not equal the mean of the R2s calculated over different groups in your data? The application being that a model's predictive ability in space (R2 calculated over temporal groups) versus in time (R2 calculated over spatial groups) can differ.
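
A tiny self-contained demonstration of the phenomenon: two groups that are each predicted equally well within-group, yet the pooled R2 is far higher than the mean of the group-wise R2s, because pooling lets the model take credit for the between-group variance. The numbers are made up for illustration:

```python
def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1 - ss_res / ss_tot

# Two groups with very different means (e.g. two sites or two years).
g1_true, g1_pred = [1.0, 2.0, 3.0], [1.5, 2.0, 2.5]
g2_true, g2_pred = [11.0, 12.0, 13.0], [11.5, 12.0, 12.5]

pooled = r2(g1_true + g2_true, g1_pred + g2_pred)
groupwise = (r2(g1_true, g1_pred) + r2(g2_true, g2_pred)) / 2
print(pooled, groupwise)  # pooled is much higher than the group-wise mean
```

Within each group R2 is 0.75, but pooled R2 is about 0.99, because SS_tot in the pooled case includes the large spread between the two group means.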

Graeme Phillipson (graeme.phillipson@bbc.co.uk)
2022-11-08 06:35:45

*Thread Reply:* Do you mean like Simpson’s paradox? ( https://en.wikipedia.org/wiki/Simpson%27s_paradox )?

Wikipedia (https://en.wikipedia.org/)
👍 Jose Ruiz-Munoz
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-11-07 09:42:55

WILDLABS’ annual State of Conservation Tech Survey is open! https://www.linkedin.com/posts/wildlabs-community_tech4wildlife-activity-6994417296681160705-t3mp

linkedin.com
👍 Oisin Mac Aodha, Sara Beery, Jaanak, Catherine, Ted Schmitt
🙌 Stephanie O'Donnell
❤️ Talia Speaker, Sara Beery
Leopoldo André Dutra Lusquino Filho (leopoldo.lusquino@unesp.br)
2022-11-09 13:28:19

Hello everyone! My name is Leopoldo and I'm a newly hired AI Assistant Professor at São Paulo State University. We are starting to develop here an ML project applied to the creation of a smart campus at our university.

We want to deal with the full scope of concerns typical of this domain (energy efficiency, transition to renewable energies, air quality, water consumption, environmental impact of university activities on the local community, etc.) using ML models that are energy efficient themselves. We are also interested in discussing fair energy efficiency metrics for ML models, which also take into account social factors.

Our team has professors from the areas of computing, electrical engineering and environmental engineering, as well as undergraduate, master's and doctoral students and post-docs. Our partnership network for this project also includes other Brazilian universities, such as the State University of Campinas, the Federal University of Rio de Janeiro and the Fluminense Federal University.

Our short-term goal (next three years) is to develop research in green AI and to create a smart campus model that can be replicated for institutions with decentralized campuses. In the long term, we want to create an inter-institutional research center on ML for climate change, with an emphasis on creating public environmental policies for Latin America, especially for the state of São Paulo (Brazil).

We are looking for mentoring from international labs with experience in the areas of ML applied to smart environments and climate change, so that through our partnership we can exchange professors and students between our institutions, create smart campus-based datasets located in developing countries and accelerate our research.

Anyone interested in talking to me about this topic, please send a direct message or an email to leopoldo.lusquino@unesp.br. Thanks everyone in advance!

🙌 Elijah Cole (Deactivated), Jon Van Oast, Stephanie O'Donnell, Emily Lines, Alexander Robillard, Jose Ruiz-Munoz, Aleksis Pirinen, nyakundi lamech, Yseult Hb, Eric Colson
❤️ Tiziana Gelmi Candusso, Sara Beery, Alexander Robillard, Julia Marisa Sekula
👋 Sara Beery, Alexander Robillard, Carly Batist, Vinicius Amaral
Alex Lascelles (alexlasc@mit.edu)
2022-11-11 11:06:20

Hi all!! 👋 My partner is job-hunting for a tech startup whose mission is climate-related and that is looking to hire full-stack engineers. We're both very climate-conscious but we're new to the climate tech job market -- I'm wondering if anyone here a) knows of such an opportunity, or b) is knowledgeable in this area & could spare 10 mins for a chat to educate us a little.

Any help will be immensely appreciated!

p.s. Very excited to join this community! Lots of fascinating and important work you guys are involved with. For anyone interested, my background is UG in physics/astronomy --> MSc in how music and sound affects the brain --> currently work at MIT, CSAIL in the space between cognitive science, computer vision, and AI (http://olivalab.mit.edu/). Great to meet you all! 😊

👀 Josh Seltzer, Sara Beery
Alex Lascelles (alexlasc@mit.edu)
2022-11-11 11:06:40

*Thread Reply:* (Extra info: Preference for an early-stage startup, but mature enough to have at least ~5-10 engineers already. Her background is 6+ yrs in Lead & Senior Software Engineering roles at an e-commerce company, working with React, TypeScript, Gatsby, Node.js, and AWS.)

Sara Beery (sbeery@caltech.edu)
2022-11-11 12:17:43

*Thread Reply:* The Climate Change AI newsletter is also a good place to look for job opportunities!

Toryn Schafer (tschafer@tamu.edu)
2022-11-11 12:51:46

*Thread Reply:* I know people who are or were at Jupiter Intelligence. Looks like they might have some openings: https://jupiterintel.com/jobs/

Jupiter
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-11-11 13:19:53

*Thread Reply:* We’ve got some job boards on the Conservation Tech Directory! Including the previously mentioned Climate Change AI, which I second as a great resource!

conservationtech.directory
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2022-11-11 13:25:27

*Thread Reply:* There are also 2 Slack channels that I know of dedicated to particularly climate-related things:
• ClimateAction.Tech
• Work On Climate

Alex Lascelles (alexlasc@mit.edu)
2022-11-16 00:26:24

*Thread Reply:* Thank you so much for your suggestions @Sara Beery @Toryn Schafer @Carly Batist and @Suzanne Stathatos, we really appreciate it!

❤️ Suzanne Stathatos
slackbot
2022-11-16 18:28:28

This message was deleted.

Daniel Davila (daniel.davila@kitware.com)
2022-11-16 18:40:29

*Thread Reply:* There is a #jobs channel! Cheers

👍 Sachith Seneviratne
Sachith Seneviratne (sachith.seneviratne@unimelb.edu.au)
2022-11-16 18:41:04

*Thread Reply:* whoops let me see if I can move it, thanks.

Sachith Seneviratne (sachith.seneviratne@unimelb.edu.au)
2022-11-16 18:42:36

*Thread Reply:* Deleted as wrong channel.

Devis Tuia (devis.tuia@epfl.ch)
2022-11-17 03:00:19

💥 MEGA PROJECT and JOB ALERT: Hello everyone! We (U. Schultz, @Devis Tuia, @Blair Costelloe, @Tilo Burghardt, M. Wikelski, B. Risse and many more) are happy to share that early next year we will start the project WILDDRONE (wilddrone.eu)! It is a Marie Curie Network, meaning that we will build a network of 13 PhD students 🧑‍🎓 across Europe around themes of drones for conservation in Africa 🦓, with PhD topics ranging from computer vision to robotics and ecology! This also means that we need your help 🆘, dear members, to recruit... Can you help us out by sharing this link among your peers/students/friends? https://www.sdu.dk/en/service/ledige_stillinger/1200187

💯 Aleksis Pirinen, Oisin Mac Aodha, Eelke, Riccardo de Lutio, Valentin Gabeff, Casey Youngflesh, Cameron Trotter, gvanhorn, Robin Zbinden, Suzanne Stathatos, Josh Seltzer, Andrew Schulz, Carly Batist, Gaspard Dussert, Anton Alvarez, Sara Beery, Ariane Roberge
🙌 Aleksis Pirinen, Justin Kay, gvanhorn, Robin Zbinden, Ben Weinstein
🦁 Blair Costelloe, Aleksis Pirinen, Elijah Cole (Deactivated), Robin Zbinden
👍 Holger Klinck, Robin Zbinden, Jorrit van Gils
😍 Jon Van Oast, Robin Zbinden, Namrata Deka, Maxime Vidal, Nicolas Arrieta Larraza
Eelke (eelke@aeria.ai)
2022-11-17 04:19:58

*Thread Reply:* Excellent news @Devis Tuia. Congrats!

🙏 Devis Tuia
😎 Jon Van Oast
Devis Tuia (devis.tuia@epfl.ch)
2022-11-17 05:59:09

*Thread Reply:* This brings you directly to the PhD projects description: https://wilddrone.eu/recruitment/

WildDrone - Drones for Nature Conservation
👍 Aleksis Pirinen
Nick Giampietro (giampiet@pdx.edu)
2022-11-17 14:12:59

*Thread Reply:* Hi @Devis Tuia, I am not a PhD student but I am highly interested in this work for my master's thesis. Thanks for sharing. I will likely reach out to the supervisors to establish correspondence with them. Is there anything I should know before contacting the advisors for DC6, DC8, or DC10? Thanks again!

Devis Tuia (devis.tuia@epfl.ch)
2022-11-18 03:28:22

*Thread Reply:* I would say, check the different project details on wilddrone.eu so that you can have a clearer idea of what they are about, and then contact people. We are hiring at the PhD level, but maybe the individual institutions have capacity for extra master's projects!

Maxime Vidal (mvidal@student.ethz.ch)
2022-11-18 12:02:28

*Thread Reply:* Congrats, love to see it !

Nick Giampietro (giampiet@pdx.edu)
2022-11-18 12:05:53

*Thread Reply:* I appreciate that Devis. I'll reach out to the individuals to discuss. Personally I probably won't be able to move for these opportunities at this time, so I'm mostly interested in just establishing contact and sharing our respective projects, perhaps collaborating on a paper along the way 😄 . Thanks again for the info!

👍 Devis Tuia
Ariane Roberge (arianeroberge13@gmail.com)
2022-12-07 13:44:55

*Thread Reply:* @Myriam Cloutier

👍 Myriam Cloutier
Nick Giampietro (giampiet@pdx.edu)
2022-11-17 14:22:37

Hi folks, I am working on my master's thesis at Portland State University, investigating drone-assisted afforestation. I am especially interested in restoring forest that has been hit by wildfire 🌲🔥. While I'm interested in all aspects of it, my primary focus as of now is surveying terrain pre/post seed-sowing 🌱🌱, such as creating a general utility model for desirable places to plant trees, or measuring the success of a seed-sowing operation (e.g. changes to leaf-area index, normalized difference vegetation index, changes to soil moisture, or counting wildlife in the area). Composing satellite data with finer-grained drone/UAV/aerial sensor data seems likely to happen! 🤯

Currently I'm in the background research phase, so I am exploring options for a specific problem to tackle.

To that end I have a few questions:
• Is anyone working on something similar? I would love to establish correspondence with some people interested in the same topics. Just looking to meet people through something besides cold-emailing paper authors 🙂
• In addition to my own Google Scholar alerts and article database searches, if anyone has seen a recent conference talk, paper, or anything like that which feels related, can you share? Not trying to crowdsource my own work, just to enrich it a bit by lightly picking brains while I get up to speed 🙂
• Similar question as above, but any recommendations for existing datasets to work with? X-band, C-band, and L-band microwave data, aerial images of wildfire-burnt land, and post-sowing seedling growth images are all useful. Also interested in thermal imaging and other multispectral data.
Thanks in advance for any leads folks can share.

👍 Jose Ruiz-Munoz, Jon Van Oast, Dan Morris, Aleksis Pirinen, Adam Noach, Sara Beery, Michael Bunsen, Elaf Almahmoud
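
For the vegetation-index side of this, NDVI is simple enough to compute directly from the red and near-infrared bands. A minimal per-pixel sketch; the reflectance values are made up for illustration:

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Dense healthy vegetation pushes NDVI toward 1; bare or burnt
    ground sits near 0 (eps guards against division by zero)."""
    return (nir - red) / (nir + red + eps)

# Healthy canopy reflects strongly in NIR but absorbs red;
# burnt soil reflects the two bands similarly.
canopy, burnt = ndvi(0.50, 0.08), ndvi(0.12, 0.10)
print(canopy, burnt)
```

Applied per pixel over a pre/post-sowing image pair, the change in NDVI is one of the simplest success measures mentioned in the message above.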
Jacob Kamminga (j.w.kamminga@utwente.nl)
2022-11-18 10:02:35

Hi all, I have funding for a small pilot project. I am looking for a postdoc who has experience in computer vision topics: detection, classification, tracking, and counting. We are looking for a unified approach and overview of state of the art and open issues for megafauna monitoring using drones, automatic nest box monitoring, and wildlife camera trapping. The funding is for 3 months, and the position is at Wageningen University (Netherlands) in collaboration with University of Amsterdam and University of Twente. You may be seconded by Wageningen at your current institution. Let me know if you are interested or may know someone!

👍 Jon Van Oast, Sara Beery, Subhransu Maji, Helena Russello, Tiziana Gelmi Candusso
Devis Tuia (devis.tuia@epfl.ch)
2022-11-18 10:03:42

*Thread Reply:* did you ask Gert Koostra at WUR?

Devis Tuia (devis.tuia@epfl.ch)
2022-11-18 10:04:13

*Thread Reply:* @Helena Russello works in his group in case you need more info

Jacob Kamminga (j.w.kamminga@utwente.nl)
2022-11-18 10:05:50

*Thread Reply:* Thanks, I will ask!

Helena Russello (helena@russello.dev)
2022-11-18 18:03:54

*Thread Reply:* Feel free to ping me!

✅ Jacob Kamminga
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2022-11-18 20:12:33

*Thread Reply:* This postdoc sounds exciting, what megafauna are you looking into?

Jacob Kamminga (j.w.kamminga@utwente.nl)
2022-11-22 09:32:31

*Thread Reply:* Hi Tiziana, sorry for the slow reply (slack didn't notify me of the updates in this thread). The WuR research group working on drones is working in African game parks so expect anything from antelope to elephants. In the Netherlands they are aiming to count geese and deer using drones. For this small pilot we want to map the synergies between 3 related topics and investigate a unified approach. We probably found a person to work on the pilot, not 100% sure yet. PM me if you want to connect 🙂

❤️ Tiziana Gelmi Candusso
Michael Bunsen (notbot@gmail.com)
2022-11-18 14:43:31

Hi all! We have a job opportunity for a frontend developer to help build interfaces for machine learning tools for an automated insect monitoring project. I would love to work with one of y'all! Or please pass on to anyone you know who may be interested.

😎 Jon Van Oast
Anselm Bradford (ans@anselmbradford.com)
2022-11-18 23:42:26

*Thread Reply:* Not directly addressing your job posting, but I notice you’re in Montreal. You should check out VT Tech Jam next year! A short drive to your south in Burlington, VT. https://techjamvt.com

Vermont Tech Jam
Michael Bunsen (notbot@gmail.com)
2022-11-22 21:54:22

*Thread Reply:* Hey awesome, thanks @Anselm Bradford! We are actually partially funded by the Vermont Center for Ecostudies so already have one foot in Vermont!

🙂 Anselm Bradford
Anselm Bradford (ans@anselmbradford.com)
2022-11-23 00:45:53

*Thread Reply:* Oh neat! Check out https://www.caryinstitute.org/about too. Maybe relevant resources there also.

Cary Institute of Ecosystem Studies
👍 Michael Bunsen
🙏 Michael Bunsen
Luke Sheneman (sheneman@uidaho.edu)
2022-11-18 17:46:16

Waterproof paint can be used to mark animals/insects in mark-recapture studies. Does anybody have experience with mark-recapture via application of infrared-reactive paints, later detected with IR cameras? If so, I would be interested in knowing more about options for IR-absorbent/reflective paint. Many thanks!

Dan Morris (agentmorris@gmail.com)
2022-11-18 19:42:56

*Thread Reply:* Out of curiosity, why the extra effort to use IR-reactive paint, instead of just visible paint? Is this a case where you expect that paint within the visible spectrum of the animal (or other animals) will have a behavioral/survival impact?

Nick Giampietro (giampiet@pdx.edu)
2022-11-20 20:52:10

*Thread Reply:* Just guessing, but it might be more reliable to blob count in a spectrum where you're unlikely to have false positives. I'm interested in hearing the reason too though (along with any potential paint recommendations)

Frederic (frederic@apic.ai)
2022-11-21 10:40:05

*Thread Reply:* We had the idea to use glitter as a unique fingerprint for bees: just spray them and the glitter pattern will be unique for every bee. It worked as a proof of concept, but in practice capturing the patterns on moving bees was not easy, so we just glued markers on them (MNIST for bees), even though that was way, way more work than just spraying glitter on bees.

Side note: I used UV-reactive paint for another use case, tracing the movement of bees while pollinating, and could share some details if you are interested.

👍 Valentin Ștefan
Luke Sheneman (sheneman@uidaho.edu)
2022-11-21 17:13:05

*Thread Reply:* @Dan Morris, @Frederic , @Nick Giampietro - Yeah, it is for AI-controlled automated mark recapture in a controlled dark environment. We also don't want to impact normal predation by making the animal stand out more. The perfect "paint" would be IR reflective/absorbent but also otherwise invisible. Such a thing might not exist?

Although it would be an added bonus, we don't necessarily need to identify unique individuals, just recognize if we've seen the animal before. Frederic - I would be interested to know what paint you used for your bee work. Your glitter idea for bees is fascinating, but I can see how that might be challenging in practice!

Nick Giampietro (giampiet@pdx.edu)
2022-11-22 14:53:13

*Thread Reply:* Yeah, it's probably hard to find a pigment that reflects IR without also reflecting more reds as well, since spectral radiance tends to be some number of smooth curves with a peak at a given frequency and, well, IR is close to red.

This article has some useful info, and even some pigments that look close to your requirements... Can't speak to whether they would be safe to use though

https://www.pcimag.com/articles/102920-ir-reflective-pigments-a-black-rainbow-of-options

pcimag.com
👍 Luke Sheneman
Michael Bunsen (notbot@gmail.com)
2022-11-23 12:43:34

*Thread Reply:* If you find a way to mark the animal without it being visible to a non-IR camera, then you will be able to make ground-truth data for detecting individuals from normal RGB images as well.

Nicolas Arrieta Larraza (n.arrieta.larraza@gmail.com)
2022-11-27 15:09:38

Hi everyone!

I am brand new to the community thanks to @Devis Tuia. I recently watched his talk at AI for Good and was amazed by his work. Moreover, he gave me hope and evidence that AI can definitely be used to support ecology and wildlife ❤️🌱

I recently graduated of a MSc in Data Science at the University of Twente and currently work as a ML engineer with audio data. My MSc thesis explored Deep Few-Shot Learning models for Acoustic Scene Classification.

I have always been looking for opportunities to apply AI for this specific topic. It was a few weeks ago when I found about AI for Good, and consequently about Devis' work, which has lead me here. I got to say I am unreasonably excited about the things that I have been reading on the Slack channels 😊

I am particularly interested in the use of audio data to support ecology and wildlife protection. I am currently researching on past projects, breakthroughs and resources. Do you have any recommendations? Any must that I should know/read about? Common use cases?

Lastly, I am actively looking for opportunities to contribute my knowledge. Do you happen to know of any project that needs volunteers? (It does not have to be audio-related.)

Looking forward to engaging in interesting conversations and contributing to the field!

Cheers.

👋 Omiros Pantazis, Dan Morris, Sofía Miñano
Elijah Cole (Deactivated) (ecole@caltech.edu)
2022-11-27 18:22:29

*Thread Reply:* @Carly Batist @Taiki Sakai - NOAA Affiliate Might have some pointers 🙂

❤️ Nicolas Arrieta Larraza, Carly Batist, Taiki Sakai - NOAA Affiliate
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-11-27 19:27:04

*Thread Reply:* Hi! I do PAM with lemurs in Madagascar as well as some soundscape indices/analysis type stuff. I use ML models to detect lemur calls, with the huge help of my computer science expert collaborators @Emmanuel Dufourq and Lorene Jeantet! I have some materials on my website regarding PAM, including a list of good papers to look at for different use cases which might be helpful for you. You may also want to check out the Conservation Tech Directory to search for projects, tools, organizations, etc in that space (filter/search by audio, passive acoustic, etc). Happy to chat further over DM or hop on a zoom if you’d like to talk further!

conservationtech.directory
❤️ Nicolas Arrieta Larraza
Devis Tuia (devis.tuia@epfl.ch)
2022-11-28 02:55:19

*Thread Reply:* @Holger Klinck works a lot with audio as well!

❤️ Nicolas Arrieta Larraza, Carly Batist
Rita Pucci (rita.pucci85@gmail.com)
2022-11-28 11:11:40

*Thread Reply:* also, Dan Stowell is working on audio and biodiversity

❤️ Nicolas Arrieta Larraza, Carly Batist
Taiki Sakai - NOAA Affiliate (taiki.sakai@noaa.gov)
2022-11-28 13:14:07

*Thread Reply:* Hi Nicolas! Our lab uses passive acoustics to study marine mammals. Audio data is used a lot in our field since sound travels so far under water (among other reasons). AI/ML is definitely being used more and more, Dan Stowell's name has already been mentioned here but if you haven't read his review paper on bioacoustics and deep learning that's an excellent starting point.

Happy to chat if you have any marine mammal questions!

arXiv.org
❤️ Nicolas Arrieta Larraza, Carly Batist
Nicolas Arrieta Larraza (n.arrieta.larraza@gmail.com)
2022-11-28 13:30:57

*Thread Reply:* Such a warm welcoming! Thanks a lot to all of you for the info, I am going to look into detail at the resources you shared and will keep in touch 😊

Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2022-11-29 20:11:13

*Thread Reply:* @Nicolas Arrieta Larraza, see my publication in the main channel posted today; we do have tons of audio analyses to do and potential future opportunities.

❤️ Nicolas Arrieta Larraza
Silvia Zuffi (silvia@mi.imati.cnr.it)
2022-11-30 12:35:08

*Thread Reply:* https://twitter.com/mmbronstein/status/1597656746767822848

🐋 Taiki Sakai - NOAA Affiliate, Nicolas Arrieta Larraza
Holger Klinck (hk829@cornell.edu)
2022-11-30 16:44:46

*Thread Reply:* Hi Nicolas, if you have any specific questions, please reach out. We are working on a bunch of marine and terrestrial projects. Here is one of them: https://birdnet.cornell.edu/ We also released a Python package for machine listening: https://shyamblast.github.io/Koogu/en/stable/ Cheers!

Jinsu Elhance (jelhance@gmail.com)
2022-11-29 13:18:47

I'm curious if anyone is working on determining the social and economic disadvantages of using AI models trained on publicly accessible data (potentially low resolution, scarce) when communities are assessing the value of protecting their land. I'm thinking particularly about pricing in Carbon Markets where higher accuracy models may allow markets to exploit land owners by withholding key information from them. If anyone has any case studies of conservation market exploitation through the digital divide I'd love to hear more.

👀 Declan, Josh Seltzer, Yseult Hb
Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2022-11-29 20:02:56

Potential Project Collaboration: Hi, everyone. I’m looking for an ecologist/ornithologist with experience in bioacoustics to collaborate on a project integrating on-the-ground acoustics with synchronous NASA airborne measurements (spectroscopy and LiDAR). We developed a proof-of-concept showing significant relationships between acoustic diversity and habitat (structural and spectral) diversity. We are looking for ecologists to interpret our acoustic diversity results across habitat types and further discuss additional analyses and methods that might help to monitor (habitats and wildlife) biodiversity from ground, air and/or space.

😎 Jon Van Oast
🙌 Stephanie O'Donnell, Toryn Schafer
Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2022-11-29 20:03:03

The focus is on Mediterranean ecosystems, particularly in California and the Cape Floristic Region (South Africa). We are interested in studying acoustic diversity across space, such as environmental and fire history gradients. There is also potential for temporal analysis. For example, we developed an acoustic network to collect data during springtime (March-July) in a Californian Natural Preserve. The acoustic campaign pairs with weekly NASA airborne spectral measurements providing unprecedented spatial, spectral and temporal resolutions to measure habitat composition, phenology and plant trait variation over time. It is a unique dataset for studying how sounds and colors change during the growing season and whether they are tied.

Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2022-11-29 20:03:12

We are primarily looking for faculty professors from the following universities (see below) due to the limitations of an upcoming NASA-JPL funding opportunity. Still, any discussion, insights or ideas for future collaborations are welcome; our research work goes beyond the referred funding opportunity.

Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2022-11-29 20:03:15

Arizona State University Carnegie Mellon University Cornell University Georgia Institute of Technology Massachusetts Institute of Technology Princeton University Stanford University Texas A&M University The University of Texas at Austin University of Arizona University of California, Los Angeles University of Colorado, Boulder University of Michigan University of Southern California

G. Andrew Fricker (africker@calpoly.edu)
2022-12-14 00:19:24

*Thread Reply:* Hey Tony!! @Antonio Ferraz I know Cal Poly San Luis Obispo didn't make the NASA/JPL cut, but I know an ecologist here who might be able to do what you describe. I have a colleague at Cornell who might know some bird folks there. Let me know if you'd like to pursue any of those opportunities. Hope all is well.

Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2022-12-15 16:03:50

*Thread Reply:* hey, how are you? yes, let’s reconnect shortly

G. Andrew Fricker (africker@calpoly.edu)
2023-02-10 22:59:48

*Thread Reply:* Hey man, doing good, busy busy. How about yourself?

G. Andrew Fricker (africker@calpoly.edu)
2023-02-10 23:00:04

*Thread Reply:* Let me know if you want to connect. I'm sure you've probably found your biologist by now.

Ben Weinstein (benweinstein2010@gmail.com)
2022-11-29 21:47:24

Is anyone in our community familiar with bird detection in NEXRAD data? We've just been testing out some proofs of concept for detecting large bird rookeries in the Everglades.

Ben Weinstein (benweinstein2010@gmail.com)
2022-11-29 21:51:07

*Thread Reply:* maybe, @Dan Sheldon

Dan Sheldon (sheldon@cs.umass.edu)
2022-11-30 08:27:11

*Thread Reply:* Haha, yes, have been working on this for years. Also with @Subhransu Maji. What kind of birds? We have an ongoing project for detecting and tracking swallow roosts (AAAI, bioRxiv). Swallows (also bats) form the most distinctive patterns on radar, but many other species have related behaviors that also show up on radar.

arXiv.org
👍 Subhransu Maji
Ben Weinstein (benweinstein2010@gmail.com)
2022-11-30 13:00:11

*Thread Reply:* Thanks. This is all very preliminary, as part of a future grant proposal, so just trying to sketch out what is possible. Briefly, the Everglades restoration act is spending about a billion dollars to reform large waterways and canals in south Florida. One of the metrics of success is wading bird colony population numbers. These are large aggregations of nesting birds, anywhere from several hundred individuals to 25,000-bird megacolonies. We cover large areas (>1000 sq km, the WCA1, 2, 3 in the map) using piloted aircraft and drones, and have been looking for an early warning system to locate colonies so we can send aircraft there. We have solid machine learning models for the optical data, plus 30 years of historic data on colony locations that could serve as potential supervised labels.

My questions with NEXRAD are: 1) What kind of spatial resolution can we expect from the 'legacy' (not dual-pol) stations? KAMX is right next to our sites. When we see dots in the NEXRAD data, is that an individual bird? A flock? What kind of target size is it picking up? 2) I've seen this mostly applied to martins and other species with a single very large signature at dawn. The herons/egrets/ibises are larger, but the pattern will be more chaotic as they come and go from the colony many times a day. Does this seem feasible?

I'm imagining a workflow for detecting colonies of high activity: 1) get NEXRAD data, 2) traditional background subtraction to remove noise (why is there always a ring of responses around the detector?), 3) look for patterns of presence in the biological reflectance range (I've read 20-30 dB with low cross-correlation ratios, from https://esajournals.onlinelibrary.wiley.com/doi/10.1002/ecs2.1539), 4) temporal tracking of objects to perform optical flow, 5) flow patterns coming in and out of potential colonies using a recurrent neural network (not enough temporal coverage?). Happy to jump on Zoom if it's easier. 
I'm playing around here https://github.com/weecology/everglades_radar/blob/main/Download.ipynb

Ben Weinstein (benweinstein2010@gmail.com)
2022-11-30 13:01:40

*Thread Reply:*

Subhransu Maji (smaji@cs.umass.edu)
2022-11-30 13:28:53

*Thread Reply:* Super cool. The scans in the notebook look somewhat different from some of the roosts we have been looking at (they look more like nocturnal migration). You could possibly use MistNet to separate rain from biology that works on non-dual pol data (e.g., cross-correlation is a dual-pol product). https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13280

Ben Weinstein (benweinstein2010@gmail.com)
2022-11-30 13:31:51

*Thread Reply:* interesting. I see the cross_correlation on this station, even though it is labeled legacy.
radar.fields.keys()
dict_keys(['reflectivity', 'velocity', 'differential_reflectivity', 'cross_correlation_ratio', 'differential_phase', 'clutter_filter_power_removed', 'spectrum_width'])

Subhransu Maji (smaji@cs.umass.edu)
2022-11-30 13:34:10

*Thread Reply:* There were two updates to NEXRAD if I remember (one was a resolution update, and another was dual-pol). Could this be pre-resolution-update data, @Dan Sheldon?

Ben Weinstein (benweinstein2010@gmail.com)
2022-11-30 13:40:17

*Thread Reply:* Are the detections directly around the sensor artifacts? I often see that pattern of heavy detections in a ring that then drops off; what's going on there?

Subhransu Maji (smaji@cs.umass.edu)
2022-11-30 14:59:31

*Thread Reply:* The radar beam points upwards at an angle — so most detections are near the radar as the migration is most intense at lower elevations.
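To make that geometry concrete, here is a minimal Python sketch of beam-center height versus range using the standard 4/3 effective-earth-radius propagation model (the function name and the 0.5° tilt, the lowest standard NEXRAD elevation, are illustrative assumptions, not anything from this thread):

```python
import math

def beam_height_m(range_m, elev_deg, earth_radius_m=6.371e6):
    """Approximate height of the radar beam center above the antenna,
    using the standard 4/3 effective-earth-radius propagation model."""
    a_e = (4.0 / 3.0) * earth_radius_m  # effective earth radius
    theta = math.radians(elev_deg)
    return math.sqrt(range_m**2 + a_e**2 + 2 * range_m * a_e * math.sin(theta)) - a_e

# At a 0.5-degree tilt the beam center is already roughly 1.5 km above
# the antenna at 100 km range, so low-altitude flight far from the radar
# slips under the beam entirely.
for r_km in (25, 50, 100, 150):
    print(f"{r_km} km -> {beam_height_m(r_km * 1000, 0.5):.0f} m")
```

So the ring of heavy detections near the radar reflects sampling geometry as much as bird density: only close to the station does the beam intersect low-flying targets.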

Ben Weinstein (benweinstein2010@gmail.com)
2022-11-30 15:00:26

*Thread Reply:* thanks, so they should be seen as genuine detections whose probability drops off with range, rather than artifacts that should be filtered.

Ben Weinstein (benweinstein2010@gmail.com)
2022-11-30 15:01:42

*Thread Reply:* Is there a MistNet for Python? If I do get this grant, I'd be interested in porting it to PyTorch. I maintain a few PyTorch ecology-related models: https://deepforest.readthedocs.io/

Subhransu Maji (smaji@cs.umass.edu)
2022-11-30 15:24:32

*Thread Reply:* There is a MistNet in Python (not sure it's public-facing yet)

👍 Ben Weinstein
Ben Weinstein (benweinstein2010@gmail.com)
2022-11-30 15:25:04

*Thread Reply:* no rush, this is not at all pressing, just folding prelim data into grants.

Subhransu Maji (smaji@cs.umass.edu)
2022-11-30 15:27:27

*Thread Reply:* Btw, Dan and I would be happy to chat sometime. Good luck with the proposal (end of semester here, so rather busy with all that)

Ben Weinstein (benweinstein2010@gmail.com)
2022-11-30 15:31:23

*Thread Reply:* thanks, I'll leave a gif I just made here (wow, matplotlib takes a while to make animations). Thresholded reflectance above 0, cross correlation below 0.75. Definitely some action in there. Too early to say what's happening, but some massive explosions around midday that end quickly (unlikely to be weather?). Black crosses are known historic bird colonies (not active every year).

Dan Sheldon (sheldon@cs.umass.edu)
2022-11-30 16:30:18

*Thread Reply:* Exciting! Big questions are the extent to which the target species show up on radar, how they appear, and the extent to which they can be separated from other sources of reflectivity, including weather and also other species. This would depend a lot on both bird behavior and radar characteristics:
• Behavior: how concentrated they are while roosting, how synchronized/organized their flights are, and especially how high they fly.
• Radar characteristics: mostly how far the colonies are from the radar, which determines how high the beam is.
In your animation I see a big exodus from ~10:40–11:30 UTC (6:40–7:30 EDT), which is right around local sunrise (7:11 EDT). This looks very much like rather widespread activity of birds taking off from those areas right around sunrise. The areas are highly correlated with your colony markers, but the reflectivity is widespread, i.e., coming pretty evenly from the whole surrounding area and not closely concentrated around the markers. I would guess it could include lots of different species, e.g., waterfowl, in addition to the waders. From this example it looks very encouraging that you could measure "morning bird activity in the Everglades", but depending on the mix of species and their relative abundance it's unclear the extent to which you could measure a specific group of species. (This is only one example, of course!)

This is from 2021, so would have dual-pol data. The dual-pol upgrade was in 2012-2013.

Resolution is 250m x 0.5 degrees. The radar is sensitive enough to detect one large-bodied bird in a volume that size, but the usual situation is many birds in one volume.

Happy to chat some time.

Ben Weinstein (benweinstein2010@gmail.com)
2022-11-30 16:38:29

*Thread Reply:* Definitely, thanks for your time. I'm using these filters; I'll post one more when it's done. Very slow animations building in matplotlib. I had thought it was mosquitoes or unfiltered weather. I'm going to wrap this into some functions and create a sample graph throughout the year to get a sense of the 'background bird activity' in the Everglades and see if we see any additional boom as the birds come in.
gate_filter.exclude_below("reflectivity", 5)
gate_filter.exclude_above("reflectivity", 30)
gate_filter.exclude_below("differential_phase", 30)
gate_filter.exclude_above("cross_correlation_ratio", 0.8)
based on a figure from https://esajournals.onlinelibrary.wiley.com/doi/10.1002/ecs2.1539
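For intuition, those four GateFilter exclusions combine into one keep-mask per gate. A minimal pure-Python sketch (the threshold values are the ones quoted in the thread, taken from the linked figure and not independently validated; on real scans Py-ART's GateFilter applies the same logic to the full 2-D field arrays, and the function name here is just illustrative):

```python
def bird_candidate_mask(reflectivity, differential_phase, rho_hv):
    """Keep a gate only if it passes all four thresholds:
    5 <= reflectivity <= 30 dBZ, differential phase >= 30 deg,
    and correlation coefficient (rho_hv) <= 0.8."""
    return [
        5 <= z <= 30 and phi >= 30 and rho <= 0.8
        for z, phi, rho in zip(reflectivity, differential_phase, rho_hv)
    ]

# First gate passes every threshold; second fails on rho_hv:
print(bird_candidate_mask([12.0, 18.0], [45.0, 50.0], [0.6, 0.95]))
# [True, False]
```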

Dan Sheldon (sheldon@cs.umass.edu)
2022-11-30 16:39:51

*Thread Reply:* By the way if you just want to view the data, NOAA’s weather and climate toolkit is the easiest way. It’s a bit arcane but has good functionality, allows you to browse the inventory, select files, make animations, etc.: https://www.ncdc.noaa.gov/wct/install.php.

ncdc.noaa.gov
👍 Ben Weinstein
Dan Sheldon (sheldon@cs.umass.edu)
2022-11-30 16:45:06

*Thread Reply:* For thresholding with dual-pol, people usually just use correlation coefficient < 0.95 = biology, otherwise weather.
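That rule of thumb is essentially a one-liner. A pure-Python sketch (the 0.95 cutoff is the conventional value Dan mentions; real pipelines apply it per gate to the cross_correlation_ratio field, and the function name is illustrative):

```python
def classify_gates(rho_hv, thresh=0.95):
    """Label each gate by the dual-pol rule of thumb:
    correlation coefficient below ~0.95 -> biological scatter,
    otherwise -> weather."""
    return ["biology" if rho < thresh else "weather" for rho in rho_hv]

print(classify_gates([0.55, 0.99, 0.80, 0.97]))
# ['biology', 'weather', 'biology', 'weather']
```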

Olof Mogren (olof.mogren@ri.se)
2022-11-30 04:33:59

A well-known figure here, @Sara Beery, is giving a talk in our Learning Machines Seminar tomorrow (Thursday) at 15:00 CET: Auto Arborist: Towards Mapping Urban Forests Across North America. The seminar is free and open to all; just connect using Zoom, no registration necessary! For info about upcoming seminars, there is an optional mailing list. https://www.ri.se/en/learningmachinesseminars/sara-beery-mit-urban-forest-mapping

RISE
:thumbsup_all: Frederic Fol Leymarie, Andrew Schulz, Aleksis Pirinen, Toryn Schafer, Nicolas Arrieta Larraza, Valentin Gabeff, Jason Holmberg (Wild Me), Jose Ruiz-Munoz, Yseult Hb
👍 Shir Bar, Aleksis Pirinen, Taiki Sakai - NOAA Affiliate, Jason Holmberg (Wild Me), Victor Anton, John Martinsson
👍:skin_tone_3: Pen-Yuan Hsing
😎 Jon Van Oast, David McClosky, Jason Holmberg (Wild Me), Eddie Zhang
🌲 Ben Weinstein, Jason Holmberg (Wild Me), Yseult Hb
Olof Mogren (olof.mogren@ri.se)
2022-11-30 04:36:22

*Thread Reply:* We also have a series of recordings of previous seminars on youtube, with speakers such as @Devis Tuia and Frederik Kratzert which are definitely worth checking out: https://www.youtube.com/watch?v=eP79uTWTTjY&list=PLqLiVcF3GKy1tuQFoDu5QKOM6S33t_4R1&index=8&t=1063s https://www.youtube.com/watch?v=Wrv-IS3wf80&list=PLqLiVcF3GKy1tuQFoDu5QKOM6S33t_4R1&index=1

YouTube
} RISE Research Institutes of Sweden (https://www.youtube.com/@RiSeSweden)
YouTube
} RISE Research Institutes of Sweden (https://www.youtube.com/@RiSeSweden)
👍 Aleksis Pirinen, Omiros Pantazis
Olof Mogren (olof.mogren@ri.se)
2022-12-08 04:01:12

*Thread Reply:* For those of you who missed this very interesting talk by @Sara Beery, the recording is on youtube! https://www.youtube.com/watch?v=Ob7XfUPmKu4&list=PLqLiVcF3GKy1tuQFoDu5QKOM6S33t_4R1&index=1&t=891s

YouTube
} RISE Research Institutes of Sweden (https://www.youtube.com/@RiSeSweden)
❤️ Sara Beery
Majid Mirmehdi (m.mirmehdi@bristol.ac.uk)
2022-12-01 12:27:56

AI for Nature Conservation - Fully funded PhD positions at Bristol University and other member institutions in the Wilddrone.eu project! See https://wilddrone.eu/recruitment/

WildDrone - Drones for Nature Conservation
❤️ Lucia Gordon, Josh Seltzer, Devis Tuia
🐘 Blair Costelloe
Devis Tuia (devis.tuia@epfl.ch)
2022-12-02 02:29:33

*Thread Reply:* If someone needs information about the project (we have 13 PhD projects after all!), the application process, etc., please reach out to me, @Blair Costelloe or @Tilo Burghardt !

Ameya Patil (ameyapatil249@gmail.com)
2022-12-02 08:11:02

Hello everyone! I am Ameya Patil, a PhD student at the University of Washington, Seattle, advised by Dr. Leilani Battle. My research area is databases and data visualization, along with a motivation to work towards environmental health. Accordingly, I am working on building interactive data analytics systems for environmental science data or contexts - weather/climate, animal tracking, waste management, sustainability, etc. I would be happy to chat if anyone has any such datasets for which they would like to build an exploration and analysis interface. Also, I am looking for internships for Summer 2023 in the same domain, so I would appreciate any leads on that front!

👋 Jon Van Oast, Dan Morris, Sara Beery
👏 Vanessa Suessle
Wenqi Su (wenqi.su@mcgill.ca)
2022-12-03 19:21:36

Hi everyone! My name is Wenqi Su. I am currently majoring in computer science and statistics at McGill University. I am super interested in socially impactful applications of computer science and actively looking for research opportunities to get involved in this domain as early as possible. I am currently interning as a data scientist in industry and working as an RA and TA for computer science classes at McGill. I have already taken ML and AI courses and have a solid understanding of the core concepts. I am eager to discover how to apply theoretical knowledge to applications through research. If anyone needs extra help with their projects/research and is willing to bring me in, trust me, I will only be helpful and not a burden even though I am just an undergrad!!

❤️ Carly Batist, Sara Beery, Eddie Zhang
👍:skin_tone_3: Pen-Yuan Hsing
👍 Kangyu Zheng, Michael Bunsen
:thumbsup_all: Frederic Fol Leymarie
Kostas Papafitsoros (k.papafitsoros@qmul.ac.uk)
2022-12-05 10:22:17

I have a fully funded PhD position in Queen Mary University of London, on the topic “Data-driven Image Processing Methods with Applications to Wildlife conservation”. Deadline: 31 January 2023. I am happy to answer any questions that you might have 🙂 https://www.qmul.ac.uk/maths/postgraduate/postgraduate-research/phd-projects/current[…]rocessing-methods-with-applications-to-wildlife-conservation/

👍 Sara Beery, Lukas Picek, Alessandra Sellini, Malte Pedersen, Alexander Robillard, Carly Batist, Vanessa Suessle
🙌 Omiros Pantazis
🎉 Lukas Picek, Alexander Robillard, Aleksis Pirinen
karen bakker (karen.bakker@ubc.ca)
2022-12-06 09:48:26

Just wanted to share some exciting news: my new book, which explores the implications of AI for analyzing non-human communication, has just been published: The Sounds of Life: How Digital Technology is Bringing Us Closer to the World of Animals and Plants (Princeton University Press, October 2022).

The book synthesizes decades of research (including work by the members of the AI for Conservation community), explores applications to conservation, and discusses potential implications for digitally-mediated interspecies communication. The book got a full-page review in Science, was covered in The Guardian, and was chosen as the book of the month by the NPR Science Friday Book Club.

If anyone is interested in learning more or would like to host a talk or webinar, please let me know!

🙌 Stephanie O'Donnell, Josh Seltzer, Declan, Robin Zbinden, Sara Beery, Viktor Domazetoski, Katelyn Morrison, Talia Speaker, Yseult Hb, Ted Schmitt, Jason Holmberg (Wild Me), Matt Weldy, Drew Blount, Lindsey Dukles, Ameya Patil
👍 gvanhorn, Omiros Pantazis, Justin Kay, Katelyn Morrison, Jason Holmberg (Wild Me)
😎 Jon Van Oast, Jason Holmberg (Wild Me)
Sankaran (shun-ka-run) (sankaranv@cs.umass.edu)
2022-12-06 16:53:29

Hi everyone! My name is Sankaran, and I am a PhD student at UMass Amherst working in causal inference, probabilistic ML, and RL. I am looking for ways to apply ideas from these fields to problems in human-elephant conflict and broadly in wildlife conservation. Excited to be involved in this group (thanks to @Elizabeth Bondi)! From the few papers I have read in this field, the idea of trying to approach conservation problems on my own felt very daunting, and I feel lucky to have found this community. By reading all of your work, I am hoping to learn more about how wildlife conservation problems can be cast as AI problems and how data from the field is collected, so I can accordingly direct my future research!

I am hoping to contribute to projects related to elephant conservation or human-animal conflict, and would also appreciate any pointers, readings or recommendations for how I can get started! I'm also happy to chat about anything related to causal inference and experiment design, if there are ways you think they can be useful in conservation problems.

👋 gvanhorn, Elizabeth Bondi, Josh Veitch-Michaelis, Katelyn Morrison, Nicolas Arrieta Larraza
Anselm Bradford (ans@anselmbradford.com)
2022-12-06 18:17:47

*Thread Reply:* Are you set on elephants? I’m wondering if maybe there could be large animal conservation projects regionally closer to Umass that might make the logistics easier for you. For instance the Wolf Conservation Center in South Salem, NY, ~2hr drive from you. I have no affiliation, but just a thought of something to look into. There might be similar regionally-relevant issues to explore in regard to bobcats, coyotes, bears, etc. that conflict with human activities.

Sankaran (shun-ka-run) (sankaranv@cs.umass.edu)
2022-12-06 18:23:22

*Thread Reply:* I think elephants were where I started looking into wildlife conservation (from growing up in South India), but I'm not set on them! These are actually really great suggestions; having resources and expertise in proximity is very helpful when I'm still new.

👍 Anselm Bradford
Amber De Neve (adeneve@umass.edu)
2022-12-12 16:01:39

*Thread Reply:* Hi Sankaran- I am a PhD student at UMass Amherst too! I actually work on plant monitoring using ML, but happy to chat if you ever want to brainstorm stuff with a biologist

Timm Haucke (timm@haucke.xyz)
2022-12-07 12:37:41

Hi everyone, there is a new camera trapping dataset on lila.science: Lindenthal Camera Traps. This dataset was captured in a wildlife park by a stereo camera, which provides not only color / IR, but also distance information. Feel free to contact me if you have any questions about the dataset. Thanks @Dan Morris for the hosting and help!

LILA BC
Written by
lilawp
Est. reading time
2 minutes
🎉 Jon Van Oast, Josh Veitch-Michaelis, Dan Morris, Peter Bull, Rowan Converse, Alexander Robillard, Ando Shah, Viktor Domazetoski, Sara Beery, Carly Batist, Kakani Katija, Vanessa Suessle, Tiziana Gelmi Candusso, Sofía Miñano
👍 Atul Ingle, Rowan Converse, Alexander Robillard, gvanhorn, Jose Ruiz-Munoz, Peter van Lunteren, Vanessa Suessle, Ștefan Istrate, Shir Bar, Rita Pucci
💯 Valentin Gabeff
👍:skin_tone_3: Pen-Yuan Hsing
Peter Bull (peter@drivendata.org)
2022-12-07 12:54:59

*Thread Reply:* Awesome, congrats on the release @Timm Haucke!

🙌 Timm Haucke
Ando Shah (ando@berkeley.edu)
2022-12-07 15:13:50

Hi folks! I'm a second-year PhD student at UC Berkeley, relatively new to conservation, and applying ML to conservation problems (currently developing deep learning SDMs) that can be operationalized quickly. I'm wondering:

  1. What are some of the journals / conferences that computer science folks working in conservation and conservation policy should be paying attention to?
  2. How do most folks deal with the interdisciplinary nature of this work - where do they feel their academic home community is - is it in their original domains (CS, Conservation Biology, Policy, etc), or is there a new interdisciplinary home emerging?
😍 Sara Beery, Rebecca
👍 Carly Batist, Peter van Lunteren, Carl Boettiger, Shir Bar
Katelyn Morrison (kcmorris@andrew.cmu.edu)
2022-12-07 15:15:05

*Thread Reply:* ACM COMPASS is one conference that is relevant https://compass.acm.org/. I've also seen a lot of CompSust work shared at AAAI and IJCAI venues

🥳 Ando Shah
👍 Boyu Zhang
Carl Boettiger (cboettig@berkeley.edu)
2022-12-07 15:57:48

*Thread Reply:* Hey @Ando Shah, nice to see you! On the conservation side, high-profile work is still concentrated in Science/Nature/PNAS which all obviously have interdisciplinary readership, though there are many discipline specific journals like Conservation Letters, Conservation Biology, or OneEarth (the relatively new flagship Cell press journal in this area). Conference-wise, AGU is a fair-sized conference and probably a nice venue for such work (long a focus of geospatial informatics), though this would be well-received at conferences like ESA and narrower ones like ISEC https://imstat.org/meetings-calendar/the-6th-international-statistical-ecology-conference/.

🙌:skin_tone_5: Ando Shah
Dan Morris (agentmorris@gmail.com)
2022-12-07 20:10:44

*Thread Reply:* This is sort of the opposite of what you asked about, but Methods in Ecology and Evolution has been very friendly to papers about new ML techniques for conservation data processing:

https://besjournals.onlinelibrary.wiley.com/journal/2041210x

If I had to generalize based on what I've seen, I would say journals like Methods in E&E that are interdisciplinary but "ecology-first" have a high bar for usefulness, but don't need to see algorithmic innovation; CS venues like AAAI/CVPR/etc. have an inherent high bar for innovation but are more comfortable with domain-specific case studies that aren't necessarily practical yet (i.e., "our technique could be applied to this interesting conservation problem").

😎 Carl Boettiger
❤️ Emilio Luz-Ricca, Ando Shah
Rebecca (rebeccayap92@uchicago.edu)
2022-12-08 00:49:25

*Thread Reply:* Not an answer to your question, but I love the second question especially because I am dealing with this and trying to pivot and wondering what degree to place myself in.

Ando Shah (ando@berkeley.edu)
2022-12-09 13:49:44

*Thread Reply:* Thank you all for those super helpful thoughts and insights!

Olof Mogren (olof.mogren@ri.se)
2022-12-08 02:16:01

Today, William Lidberg from Swedish University of Agricultural Sciences is giving a talk on AI-based forest landscape mapping using remote sensing. 15:00 CET on Zoom. https://www.ri.se/en/learningmachinesseminars/william-lidberg-slu-geographical-intelligence

RISE
🙌 Aleksis Pirinen, Josh Seltzer, Michael Bunsen, Jon Van Oast
😍 Sara Beery
🙌:skin_tone_5: Ando Shah
Sean Nachtrab (sean.nachtrab@gmail.com)
2022-12-08 10:38:14

*Thread Reply:* This was interesting to listen to! Thank you

Olof Mogren (olof.mogren@ri.se)
2022-12-09 07:42:01

*Thread Reply:* Good to hear @Sean Nachtrab! For those of you who missed this inspiring talk about an important application of state-of-the-art AI (maps of soil and wetlands using LIDAR data and machine learning), it's available on youtube! https://www.youtube.com/watch?v=Ezwgsjh2Oh8&list=PLqLiVcF3GKy1tuQFoDu5QKOM6S33t_4R1&index=1

YouTube
} RISE Research Institutes of Sweden (https://www.youtube.com/@RiSeSweden)
Vanessa Suessle (vanessa.suessle@h-da.de)
2022-12-08 07:55:04

Hello all. I am a PhD student from Darmstadt, Germany and I just started.

My topic is 'AI applications for wildlife conservation', with a focus on Computer Vision and Individual Identification for fur-patterned species.

During my master's thesis I took a first approach to automatically identifying leopards from video data without pre-labeled data. My plan is to build on this and make it more robust and usable.

I am always happy to meet new people and have a chat about CV and AI in wildlife conservation.

👋 Timm Haucke, Josh Seltzer, Jose Ruiz-Munoz, Nicolas Arrieta Larraza, Jason Holmberg (Wild Me), Taiki Sakai - NOAA Affiliate, Katelyn Morrison
Josh Seltzer (jyseltz@gmail.com)
2022-12-08 08:17:07

*Thread Reply:* Welcome Vanessa! I've just started working on a project to identify jaguars and other Central American felines in video data, so I'd love to connect some time 🙂

👍 Vanessa Suessle
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-12-08 12:26:13

*Thread Reply:* Hi @Josh Seltzer and @Vanessa Suessle At Wild Me we have data and benchmarks that might help for jaguars, leopards, cheetahs, and snow leopards. Imagery, not video. But happy to collaborate on this.

🙌 Josh Seltzer, Taiki Sakai - NOAA Affiliate
Josh Seltzer (jyseltz@gmail.com)
2022-12-08 13:20:29

*Thread Reply:* Hey @Jason Holmberg (Wild Me) that sounds awesome! I had been looking at the Leopard ID 2022 dataset and wondering how well that might support transfer learning to jaguars, it'd be a pleasure to collaborate 🙏

👍 Jason Holmberg (Wild Me)
Vanessa Suessle (vanessa.suessle@h-da.de)
2022-12-09 03:59:08

*Thread Reply:* Hi @Josh Seltzer and @Jason Holmberg (Wild Me) I would be very happy to collaborate as well. The ideas sound great. I would love to chat with you on those topics. I am in CET timezone. Feel free to suggest a time and date.

Josh Seltzer (jyseltz@gmail.com)
2022-12-16 17:57:44

*Thread Reply:* @Vanessa Suessle hey, sorry for the delayed response! I am pretty flexible next week, I can do 9am (15:00 CET) or any time around then any day except Thursday. Let me know how that sounds, and @Jason Holmberg (Wild Me) it'd be great if you could join and/or we could fill you in after.

Cheers and have a great weekend 🙇

Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2022-12-20 19:36:26

*Thread Reply:* I am available to meet this week or early in the new year. Let me know.

Vanessa Suessle (vanessa.suessle@h-da.de)
2023-01-03 10:45:56

*Thread Reply:* Hello, I am so sorry. I missed the notification of the message. Suggest any time this or next week and I can make it work.

Vanessa Suessle (vanessa.suessle@h-da.de)
2023-01-27 04:26:56

*Thread Reply:* Hello all, I would still be interested to meet and have a chat. In the upcoming weeks I don't have many fixed meetings and could arrange to meet at almost any time.

Sorry that my first reply took so long. I am kinda new on Slack and missed notifications. Vanessa :)

Burak Ekim (burak.ekim@unibw.de)
2022-12-08 10:09:40

Hey everyone! I am Burak, a PhD student from Munich, Germany. I am working on a project in which we (with Michael Schmitt and Ribana Roscher) raise the open question of what makes nature wild for better understanding of wilderness areas and, ultimately, the concept of wild. I mainly use explainable machine learning, uncertainty quantification, and data fusion methods to get the most out of multi-modal earth observation data. 📡🌲 Feel free to ping me and check my website for more information.

🌲 Nico Lang, Josh Seltzer, Jason Holmberg (Wild Me), Katelyn Morrison
👏 Vanessa Suessle, Andrzej Białaś
👀 Andrzej Białaś
👍 Jan Kees
Josh Seltzer (jyseltz@gmail.com)
2022-12-08 12:48:30

*Thread Reply:* Sounds really cool! I'm taking a look at MapInWild but since I don't think it's addressed there (maybe I am mistaken), I'd be really curious to hear your thoughts on the temporal aspect of what it means to be 'wild'. For example, in Latin America there seems to be a common trend of "reforestation reversals" where reforestation efforts are only sustained for a few years, before land is again repurposed for other uses. I imagine it must be very difficult to account for these complexities.

(Not trying to poke any holes - I am new to forest ecology, so just really curious if/how these kinds of complications are considered!)

Frontiers
Burak Ekim (burak.ekim@unibw.de)
2022-12-08 14:32:04

*Thread Reply:* Thanks Josh! Yes, in this paper we present our initial efforts in addressing the task of wilderness mapping by forming both sensitivity analysis and semantic segmentation setups on spatial/spectral levels. Although I have not addressed the task from that perspective yet, I would say the temporal aspect could introduce a new dimension where discriminative patterns are more noticeable, providing valuable hints that are hidden otherwise.

Yes, undisturbance is a vital criterion for wilderness areas in both gradients (spatial and spectral). Probably not what you want to hear, but I would approach the reforestation-reversal effects from the CV/ML side and formalize it as an optimisation task (I would simply put it on the shoulders of a learner).

A side note: While doing that, I would also not create a strong connection between wilderness and forested areas (as wilderness areas can be found in all kinds of ecological/climate zones).

Ethan Shafron (ethan.shafron@gmail.com)
2022-12-09 14:51:20

*Thread Reply:* Sounds intriguing! As a side note, there is a ton of literature on this from indigenous and political science perspectives, which I think can be very useful to contextualize the concept of wilderness/wildness in your work. Also, as a result of the "Wilderness Act" in the US, there are actually a lot of documents that outline quantitative metrics for managing wilderness areas here, even if the concept itself is more of a narrative goal for managers. In many ways, these metrics are viewed as loss functions - things to minimize in order to maintain "wilderness character". See below - cheers!

https://www.pnas.org/doi/epdf/10.1073/pnas.2022218118

https://wilderness.net/practitioners/toolboxes/wilderness-character/default.php

https://d1wqtxts1xzle7.cloudfront.net/75523678/Indigenous20Knowledges-libre.pdf?1638420131=&response-content-disposition=inline%3B+filename%3DIndigenousKnowledgesandthePoliticso.pdf&Expires=1670618615&Signature=fWDkqyorhBstnd-RdVa65WCwAQ7RYcOBgQdkTdPWWSbqRiI15dic~-1XlSVFMOYF8owYrTsaY74bGZb19i~3zxTTatsWpkevLYwTTU1okyJpiiZbpHhUSBG6NiIDtG9yKBn1~H5pHhmOrPh~WMnmpYVT~cvkTPW6YzIVRERLokNP4wFdAgJA-7ZtvwfoF2q-8eL5IoBi5iea2BXZ5u7YWwd2FpDPXtnVJ5F-AeH25EERBuLZWzotvDTGNu~KgVL0-23VVqEKE9UFR5IxiQPc2qrI-du5LQGde8RZ4T15RPYk5rCnAbbOuaLbikq6V1-E69aJrrzlJDn2Fdx8D-NWPA&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA
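The "metrics as loss functions" framing can be made concrete with a toy weighted disturbance score to minimize; the indicator names and weights below are entirely invented for illustration:

```python
# Hypothetical wilderness-character "loss": a weighted sum of disturbance
# indicators, each scaled to [0, 1]. Lower is "wilder". Names/weights invented.
WEIGHTS = {"road_density": 0.4, "light_pollution": 0.3, "canopy_loss": 0.3}

def wilderness_loss(indicators: dict) -> float:
    """Weighted disturbance score: the quantity a manager would try to minimize."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

pristine = wilderness_loss({"road_density": 0.0, "light_pollution": 0.1, "canopy_loss": 0.0})
degraded = wilderness_loss({"road_density": 0.8, "light_pollution": 0.9, "canopy_loss": 0.5})
```

Managing "wilderness character" then amounts to choosing interventions that reduce this score over time.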

Burak Ekim (burak.ekim@unibw.de)
2022-12-10 13:09:02

*Thread Reply:* Great sources and perspectives Ethan, thanks a bunch for your input! Exactly, wilderness areas are home to many indigenous communities and we should do our best to keep those areas undisturbed. But, as a cultural construct that changes in definition from country to country, it is not really easy to delineate those areas from space. That is why I find the studies on the quantification of wilderness management quite valuable for our investigation.

Graeme Phillipson (graeme.phillipson@bbc.co.uk)
2022-12-08 13:10:15

Could I ask, what is the state of the art for position tracking of animals in urban environments? Especially if you wanted to track a large number? (edited: referring to geographical position data)

Josh Seltzer (jyseltz@gmail.com)
2022-12-08 13:15:21

*Thread Reply:* Do you mean based on ground-level images?

Graeme Phillipson (graeme.phillipson@bbc.co.uk)
2022-12-08 13:52:37

*Thread Reply:* I meant their positions, like GPS data, but yes at ground level.

Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2022-12-08 15:38:37

*Thread Reply:* Do you mean which gps-collars? What type of animal?

Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2022-12-08 15:43:10

*Thread Reply:* GPS has outpaced VHF as the technology has gotten better and relatively cheaper. GPS collars are more convenient. There are other mechanisms to track animals too; one being developed lately is the ICARUS initiative, using a dedicated satellite, but Russia cut the cord recently so we'll see what happens there. My advice is to search "telemetry" and your organism of interest and see what people have used in the latest literature. I think someone made a tracker with a Raspberry Pi too (or maybe I dreamed it), but so far GPS is the go-to protocol, for mammals at least. Good luck!
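Once fixes start coming in from a collar, a common first processing step is computing step lengths between consecutive GPS positions. A minimal Python sketch (the coordinates below are invented):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) fixes."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Step lengths along a hypothetical urban track (fixes are made up)
track = [(43.6532, -79.3832), (43.6600, -79.3900), (43.6700, -79.3850)]
steps = [haversine_km(*track[i], *track[i + 1]) for i in range(len(track) - 1)]
```

Real telemetry pipelines add timestamp handling, fix-quality filtering, and projection to a local coordinate system, but the distance math is this simple.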

👍 Graeme Phillipson, Sara Beery
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2022-12-09 13:19:44

*Thread Reply:* Here is a really cool paper from this year on all telemetry related to animal movement and ecology. It's a good starting point.

Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2022-12-09 13:26:48

*Thread Reply:* I just noticed you wanted to track specifically in urban environments. When it comes to tracking devices for terrestrial animals, your main concerns will be body weight and budget; landscape won't have a big impact on your choice. The ones we used were Lotek, and they were pretty good: some stayed on for 1-2 years on urban coyotes. https://www.lotek.com/

Lotek |
Graeme Phillipson (graeme.phillipson@bbc.co.uk)
2022-12-11 08:24:42

*Thread Reply:* Thanks @Tiziana Gelmi Candusso!

❤️ Tiziana Gelmi Candusso
Pen-Yuan Hsing (penyuanhsing@posteo.is)
2022-12-10 17:02:55

Quick question: Is there any work done on automatic identification of butterflies in images, either identifying species and/or even unique individuals? I was recently looking at some work on butterfly observations and it piqued my curiosity.

Josh Seltzer (jyseltz@gmail.com)
2022-12-10 17:11:25

*Thread Reply:* Hey @Pen-Yuan Hsing! I saw this shared on Twitter last week—I am guessing there is a lot of other work dealing specifically with species/individual identification, but it seems like there are very robust/comprehensive imaging techniques available, in case you're interested: https://www.nature.com/articles/s42003-022-04282-z

Nature
Katelyn Morrison (kcmorris@andrew.cmu.edu)
2022-12-11 03:24:53

*Thread Reply:* WildMe https://www.wildme.org/ doesn't do this for butterflies yet, but I bet their pipeline could do it given some initial images to set it up. WildMe does species identification and identifies unique individuals for species like whales, turtles, amphibians, and various carnivores 🙂. You might want to look into the Hotspotter Algorithm for individual identification.
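For context on the HotSpotter-style approach: individual ID of patterned animals typically matches local texture descriptors from a query image against a per-individual gallery and ranks individuals by match count. A toy numpy sketch with fully synthetic descriptors (this is not the actual HotSpotter implementation, just the matching idea):

```python
import numpy as np

def match_score(query, gallery, ratio=0.8):
    """Count descriptor matches that pass Lowe's ratio test.

    query: (n, d) descriptors from the query image
    gallery: (m, d) descriptors from one candidate individual
    """
    # Pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(query[:, None, :] - gallery[None, :, :], axis=-1)
    part = np.partition(d, 1, axis=1)   # two smallest distances per query row
    best, second = part[:, 0], part[:, 1]
    return int(np.sum(best < ratio * second))

rng = np.random.default_rng(0)
indiv_a = rng.normal(size=(50, 32))     # gallery descriptors, individual A
indiv_b = rng.normal(size=(50, 32))     # gallery descriptors, individual B
query = indiv_a[:30] + 0.05 * rng.normal(size=(30, 32))  # noisy re-sighting of A

scores = {"A": match_score(query, indiv_a), "B": match_score(query, indiv_b)}
```

The individual with the highest score is the predicted match; real systems add spatial verification on top of the raw descriptor counts.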

wildme.org
Yves Bas (yves.bas@gmail.com)
2022-12-11 16:38:33

*Thread Reply:* iNaturalist is pretty good for butterfly auto-ID in a large part of the world: https://www.inaturalist.org/

inaturalist.org
🦋 Alex Borowicz, Katelyn Morrison
Pen-Yuan Hsing (penyuanhsing@posteo.is)
2023-01-17 12:33:12

*Thread Reply:* Sorry for my late response here, thanks for the suggestions, I'll take a look! I have a butterfly colleague with whom I will also share this.

Katelyn Morrison (kcmorris@andrew.cmu.edu)
2023-01-17 12:34:08

*Thread Reply:* Hey! I am talking to a group at Mila who is working on this - let’s touch base soon? They are also in this slack actually!

Pen-Yuan Hsing (penyuanhsing@posteo.is)
2023-01-17 12:35:53

*Thread Reply:* Hi @Katelyn Morrison cool! The colleague I know is Denise Dalbosco Dell'Aglio (part of a group at the Univ of Bristol where I happen to also be, but part of an unrelated project) who has a mountain of butterfly footage from Panama. Do you know them?

Pen-Yuan Hsing (penyuanhsing@posteo.is)
2023-01-17 12:36:01

*Thread Reply:* I'd love to touch base about this!

Pen-Yuan Hsing (penyuanhsing@posteo.is)
2023-01-17 12:36:38

*Thread Reply:* My background is in ecology/conservation but not butterflies specifically, but would be interested if we can make something interesting out of this.

Katelyn Morrison (kcmorris@andrew.cmu.edu)
2023-01-17 12:37:18

*Thread Reply:* Ohhh awesome! I don’t know them, but yes I’d love to touch base. I am starting up a study to understand how lepidopterists collaborate with XAI to determine the ID of the butterfly. My background is computer science and human-AI collaboration. :)

Pen-Yuan Hsing (penyuanhsing@posteo.is)
2023-01-17 12:37:51

*Thread Reply:* That's so cool! Lemme reach out to Denise to see if I can get a few sample clips to share.

🙌 Katelyn Morrison
Andrzej Białaś (andrzej@appsilon.com)
2022-12-12 07:50:18

👋 Hello everyone! I’m Andrzej Białaś and I am the Data4Good Lead at Appsilon. I joined this Slack community a while ago and mostly lurked around. The intro is well overdue!

Happy to connect with others working on using technology for the benefit of planet & people (Add me on LinkedIn or DM here).

Some more info about me and our work: Appsilon delivers high-quality data analysis and technical support to help our clients make the most of their data. We specialize in building advanced enterprise R Shiny applications, and we have a passion for using our skills to make a positive impact on the world.

In my work at Appsilon, I oversee our Data4Good program, which realizes that mission. One of our keystone projects coming from the Data4Good initiative is Mbaza AI - an open-source algorithm that allows rapid biodiversity monitoring at scale.

PS: I am excited to learn from and collaborate with all of you in this community! PPS: I am much more responsive on LinkedIn, as I have only a handful of Slack channels on my mobile device.

👋 Viktor Domazetoski, Jose Ruiz-Munoz, Carly Batist, Katelyn Morrison, Dan Morris, Lucia Gordon, Sara Beery, Jon Van Oast, Amber De Neve, Eddie Zhang, Timm Haucke, Lindsey Dukles, Mikey Tabak, Anton Alvarez, Yseult Hb
Austin Greene (austin.greene@whoi.edu)
2022-12-13 09:19:00

Hey all! My name is Austin Greene and I am a postdoctoral investigator at Woods Hole Oceanographic Institution. I mostly work on coral reefs and other threatened marine habitats as a disease ecologist by training. Over the past few years I started building low-cost camera systems (CoralCam and KiloCam) and that brought me into the world of conservation technology. I'm hoping to develop a new iteration on these that has embedded ML and am keen to work on any other projects people have that make conservation tech more accessible to those with the least resources. Nice to meet you all!

👍 Aleksis Pirinen, Fagner Cunha, Timm Haucke, Alexander Robillard, Sam Kelly, Katelyn Morrison, Eddie Zhang, Justin Kay, Jose Ruiz-Munoz, Declan, Sara Beery, Lindsey Dukles, Armi Tiihonen, David Will, Valentin Ștefan, Talia Speaker, Anthony Bao, Andrew Schulz, Yseult Hb, Henrik Cox (Sentinel)
👏 Silvia Zuffi, Lindsey Dukles, Lucia Gordon, Ted Schmitt, Toryn Schafer
👋 Sara Beery, Atul Ingle, Viktor Domazetoski, Jon Van Oast, Carly Batist, Ed Miller
👍:skin_tone_3: Pen-Yuan Hsing
Josh Seltzer (jyseltz@gmail.com)
2022-12-13 09:20:31

*Thread Reply:* This sounds really cool! Welcome @Austin Greene 👋

Alexander Robillard (RobillardA@SI.EDU)
2022-12-13 09:31:59

*Thread Reply:* Sounds awesome!

Austin Greene (austin.greene@whoi.edu)
2022-12-13 14:36:27

*Thread Reply:* Thanks so much for the warm welcome!

Valentin Ștefan (valentin.stefan.vst@gmail.com)
2022-12-13 15:31:50

*Thread Reply:* Hi Austin, I am interested in such custom cameras for pollinator monitoring. We wanted to order some Raspberry Pi gear and follow the guidelines of @Maximilian Sittinger, but I couldn't find any available on the market at the moment. I am therefore interested in alternatives / other microcomputers. Could you share some info about your models? How easy are the components to order and put together, what do they cost, and how easy is it to add custom code?

Carly Batist (cbatist@gradcenter.cuny.edu)
2022-12-13 15:53:47

*Thread Reply:* You should talk to the Sentinel team at ConservationX! They’re working on ML-enabled camera traps/edge AI with Edge Impulse. Edge Impulse has also been working with the Audiomoth folks to develop edge computing for acoustic recorders. You could also check out other resources related to this through the Conservation Tech Directory (KiloCam is already in there 🙂)

👍 Valentin Ștefan
Ed Miller (ed@hypraptive.com)
2022-12-14 16:49:06

*Thread Reply:* Tagging the Sentinel team: @Henrik Cox (Sentinel), @Sam Kelly

👍 Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-12-14 19:25:38

*Thread Reply:* totally forgot to do that 🤦‍♀️. thanks Ed!

👍 Ed Miller
Henrik Cox (Sentinel) (henrik@conservationxlabs.org)
2022-12-14 21:01:56

*Thread Reply:* Thanks @Ed Miller and @Carly Batist! This looks fantastic @Austin Greene, it would be great to see how we can cross wires. I'll DM

👍 Ed Miller
sam maxwell (maxwellsamjm@gmail.com)
2022-12-13 16:01:54

Hi all, I am Sam and I work on a partnership research project with SUEZ and LaBRI (the Computer Science Research Lab of Bordeaux) in France on sound event detection and underwater acoustics. This project aims to develop an automatic system to process and analyze acoustic data from underwater environment monitoring, covering marine and freshwater ecosystems but also sanitary system environments. Having first studied biology and ecology, I am interested in conservation topics. Happy to join this group :)
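For a flavor of the simplest possible sound event detector, here is a toy energy-threshold sketch on a synthetic signal; real SED systems use spectrogram features and learned classifiers, so this is only the conceptual starting point:

```python
import math
import random

def frame_energies(signal, frame_len):
    """Mean-square energy of consecutive non-overlapping frames."""
    return [sum(x * x for x in signal[i:i + frame_len]) / frame_len
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

def detect_events(signal, frame_len=100, k=4.0):
    """Indices of frames whose energy exceeds k times the median frame energy."""
    energies = frame_energies(signal, frame_len)
    median = sorted(energies)[len(energies) // 2]
    return [i for i, e in enumerate(energies) if e > k * median]

# Synthetic "recording": low-level noise with a loud tonal event in the middle
random.seed(0)
sig = [0.01 * random.gauss(0, 1) for _ in range(1000)]
for i in range(450, 550):
    sig[i] += math.sin(0.3 * i)  # injected event spanning samples 450-549

events = detect_events(sig)  # frames 4 and 5 cover samples 400-599
```

Using the median as the noise floor keeps the threshold robust to the event itself inflating the mean.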

👋 Suzanne Stathatos, Armi Tiihonen, Mikey Tabak, Jose Ruiz-Munoz
👍 Talia Speaker, Déva Sou
Talia Speaker (talia.speaker@wildlabs.net)
2022-12-13 16:06:40

Hi everyone! Carly shared a while ago, but wanted to flag again that our WILDLABS State of Conservation Technology Survey 2022 is open until Dec 30th. The aim of this annual research is to track and inform the evolution of conservation tech, and we'd love to have more AI for Conservation community perspectives represented. Last year, our global assessment showed that ML/computer vision tools were viewed as having the highest untapped potential, so it will be interesting to see if and how that shifts this year, and how needs are changing. Thanks so much in advance for your time and please share widely! More background info here.

colostate.az1.qualtrics.com
❤️ Carly Batist, Sara Beery, Pen-Yuan Hsing
Sasha Luccioni (sasha.luccioni@huggingface.co)
2022-12-13 17:41:36

Hi all! My name is Sasha and I'm a researcher at Hugging Face 🤗, an AI startup that aims to democratize AI and make datasets and models more accessible to more communities. I recently worked with @Dan Morris to add the LILA camera trap data to the HuggingFace Hub (the central place where people can share resources), and I'm looking forward to adding more datasets and more modalities 🖼️ 🔊 📃 🐘 :giraffe_face: 🐋 🌴 Reach out if this sparks any ideas on your end!

👋 gvanhorn, Ben Weinstein, Elizabeth Bondi, Caleb Robinson, Dan Morris, Declan, Jason Parham, Katelyn Morrison, Nico Lang, Cameron Trotter, Timm Haucke, Viktor Domazetoski, Josh Veitch-Michaelis, Mikey Tabak, Carly Batist, Eddie Zhang, Ștefan Istrate, Jose Ruiz-Munoz, Sara Beery, Lindsey Dukles
😎 Jon Van Oast, Jason Parham, Timm Haucke, Sara Beery, Lindsey Dukles
🤗 Suzanne Stathatos, Jason Parham, Timm Haucke
❤️ Justin Kay, Caleb Robinson, Jason Parham, Timm Haucke
Ben Weinstein (benweinstein2010@gmail.com)
2022-12-13 17:43:21

*Thread Reply:* We have tree data and a model that would be nice to contribute. https://deepforest.readthedocs.io/, https://github.com/weecology/NeonTreeEvaluation. Plus tree species data that will be around soon. There's some overlap with the datasets that torchgeo and @Caleb Robinson have done a great job compiling.

Sasha Luccioni (sasha.luccioni@huggingface.co)
2022-12-13 17:44:09

*Thread Reply:* Sure, let me know if I can help you upload it! The documentation is pretty good, FWIW : https://huggingface.co/docs/datasets/index

huggingface.co
👍 Ben Weinstein
Caleb Robinson (calebrob6@gmail.com)
2023-02-01 01:28:22

*Thread Reply:* (reviving a 2-month-old thread out of absolutely nowhere) @Ben Weinstein FYI we've started putting some torchgeo stuff on HuggingFace (https://huggingface.co/torchgeo). In particular we're rehosting some datasets there.
pros:
• Wayyy faster download speeds than hosting stuff on Zenodo!
• We've hit a couple of cases now where one of our datasets stops working because the SSL certificate of wherever it was hosted had expired, most recently with the EuroSAT dataset https://github.com/phelber/EuroSAT/issues/10. We're betting that HuggingFace knows way more about sysadmin-ing than random academics 🙂
cons:
• no idea, seems good
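For anyone curious, a file in a Hub dataset repo resolves to a predictable download URL; the helper below sketches that layout. The repo id and filename are illustrative, and the actual `load_dataset` call (which needs network access) is left commented out:

```python
# Sketch of how a dataset rehosted on the Hugging Face Hub is addressed.
# The repo id and filename below are illustrative, not real torchgeo files.

def hub_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hub dataset repo."""
    return f"https://huggingface.co/datasets/{repo_id}/resolve/{revision}/{filename}"

url = hub_resolve_url("torchgeo/eurosat", "EuroSAT.zip")

# With the `datasets` library installed, loading is a one-liner (needs network):
# from datasets import load_dataset
# ds = load_dataset("torchgeo/eurosat")
```

Because the URL scheme is stable, downstream loaders never break when the original academic host's certificate lapses, which is exactly the failure mode described above.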

🤗 Sasha Luccioni
👍 Ben Weinstein
Sasha Luccioni (sasha.luccioni@huggingface.co)
2023-02-01 09:38:48

*Thread Reply:* haha let me know if you have any questions, y'all :facewithcowboy_hat:

❤️ Caleb Robinson
Mikey Tabak (tabakma@gmail.com)
2022-12-14 10:46:12

Hi Folks, I started working for a new environmental data science consulting company, Quantitative Science Consulting, and we’re seeking new clients. I have a PhD in Ecology, with extensive experience with machine learning, deep learning / computer vision, geospatial modeling, and hierarchical Bayesian modeling. I have done quite a bit of work using computer vision for #camera_traps, as well as for drone and satellite imagery. We are committed to taking on projects that contribute to mitigating global change, especially climate change and conservation.

As this company is very small, we hope to be able to keep costs lower than larger consulting companies so that we can do some lower budget projects that will have important conservation implications. Please consider us for your future data science needs.

If you have any questions, or are interested in a quote, please reach out to me at tabakma@gmail.com or on LinkedIn, or contact QSC on LinkedIn.

🎉 Sara Beery, Lucia Gordon
Katie Wetstone (she, her) (katie@drivendata.org)
2022-12-14 16:46:04

Hi y'all! 👋 I'm a data scientist at a small consulting company focused on social good, and wanted to share a programming competition we just posted! Competitions are generally a great way to break into the programming space and get some python creds 🐍

Today we launched an exciting computer vision challenge in partnership with NASA :female_astronaut: to identify harmful algal blooms in important water sources. Even better, it's punnily named the Tick Tick Bloom: Harmful Algal Bloom Detection Challenge!

🌊 🌿 The goal of the competition is to help public health officials detect and address algal blooms in water sources like lakes and reservoirs, which can pose serious dangers to both human and ecosystem health 📡 You can create new state-of-the-art methods using satellite imagery to protect communities across the US 💸 There's a $30,000 prize pool for the top performing teams!

You can learn more and sign up to participate by visiting the challenge here. Feel free to share with anyone else you think might be interested, and reach out if you have any questions! Happy coding 🙂
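A common baseline for bloom detection in multispectral imagery is a chlorophyll-sensitive band ratio such as NDCI; a toy sketch with invented reflectance values (a real competition entry would of course use the challenge's actual data and a learned model):

```python
def ndci(red_edge: float, red: float) -> float:
    """Normalized Difference Chlorophyll Index: higher values suggest more chlorophyll."""
    return (red_edge - red) / (red_edge + red)

# Toy reflectance values for a clear lake vs. a bloom-affected one (made up)
clear = ndci(red_edge=0.03, red=0.05)
bloomy = ndci(red_edge=0.09, red=0.04)
```

Thresholding an index like this gives a crude first-pass bloom flag that ML methods then refine.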

DrivenData
🙌 Josh Seltzer, Lucia Gordon, Sara Beery, Casey Youngflesh, Peter Bull, Aleksis Pirinen, Jose Ruiz-Munoz, Tiziana Gelmi Candusso, Timm Haucke, Stephanie O'Donnell, Carl Boettiger, Yseult Hb, Nicolas Arrieta Larraza
📡 Carl Boettiger
Kateryna Nekhomiazh (kateryna.nekhomiazh@mail.utoronto.ca)
2022-12-15 15:56:01

Hi all! My name is Kate, I'm a first-year MSc student in computer science at the University of Toronto, doing research in reinforcement learning.

I recently discovered the potential for AI to help with environmental conservation efforts. I am now intensively learning about this topic and am eager to apply my skills in machine learning and AI to make a positive impact on the world. I am particularly interested in the impact of unconscious consumption and war on the Earth’s ecosystem.

If you have any projects that you need help with, or to which you plan to attract students, I would love to participate!

👋 Sara Beery, Josh Seltzer, Suzanne Stathatos, Andrew Schulz, Lucia Gordon, Dan Morris, Aleksis Pirinen, Timm Haucke, Lindsey Dukles
😎 Jon Van Oast
Josh Seltzer (jyseltz@gmail.com)
2022-12-15 16:01:07

*Thread Reply:* Hey @Kateryna Nekhomiazh welcome!! I'm a UofT alum, so nice to see you here 🙇 I'm in the early stages of working on something you might be interested in, I'll reach out once it's a bit more concrete! Since you're interested in RL (and things at least tangential to policy), you might be interested in AI Economist (they also have an active slack community and lots of events) as well

Website: <https://www.einstein.ai/the-ai-economist>
Kateryna Nekhomiazh (kateryna.nekhomiazh@mail.utoronto.ca)
2022-12-15 16:03:45

*Thread Reply:* thank you so much, I will look forward to it!

Peggy Bevan (ucbtbev@ucl.ac.uk)
2022-12-18 05:56:06

Hi all 🙂 I’m wondering if anyone has some recommendations for ML textbooks useful for postgrad students (Master’s level)? We are creating a reading list for students learning ML for Ecology but they will be from a range of backgrounds, some not computer scientists. Thanks!

👍 Omiros Pantazis, Kakani Katija
Ethan Shafron (ethan.shafron@gmail.com)
2022-12-18 10:24:00

*Thread Reply:* This is a great book for general ML concepts and algorithms, and it's all free - https://www.statlearning.com/

🙏 Peggy Bevan
Matt Weldy (matthewjweldy@gmail.com)
2022-12-18 10:32:24

*Thread Reply:* I enjoyed working through this one, although it is 'deep learning' focused, so there isn't any text space on other learning approaches. It has code available in the three largest training frameworks. https://d2l.ai/

🙌 Peggy Bevan
Graeme Phillipson (graeme.phillipson@bbc.co.uk)
2022-12-18 11:17:21

*Thread Reply:* I often use the google ML crash course for that kind of thing https://developers.google.com/machine-learning/crash-course

Google Developers
👍 Peggy Bevan
Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2022-12-18 16:47:17

*Thread Reply:* I personally found the presentation in Introduction to Statistical Learning quite dry. It does cover R, which ecologists like (and if you need them to learn R, then that could be important), and it does cover the theory from first principles. Despite the "introduction", it is still quite mathsy and I think you need to read quite a lot of it before you can actually use the information. I would say it's an approachable theory book to accompany more practical books. There is also a MOOC available from the authors.

Book-wise, I like Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron. It uses Tensorflow/Keras and it covers "classical" ML too. There's also the Fast AI book if you want something PyTorch focused (and I think it's free?). Both are O'Reilly. I've heard good things about d2l.

Be warned that there is a lot of cruft in O'Reilly's offerings these days and I think their editorial standards aren't as high as they used to be. Manning has some nice books - I'm reading Human-In-The-Loop Machine Learning at the moment.

In terms of courses, Fast AI is enduringly popular. My only complaint is that they rely a lot on their library which I think adds unnecessary fragmentation when things like Pytorch-Lightning exist. But, the course is presented at a good level for beginners, the API is easy, and there are a load of little tips and tricks for Python/Jupyter.

For Deep Learning theory, I really think Andrej Karpathy's CS231n lectures are some of the best out there even though they're quite old (specifically the season where he taught). Also hey, it's 2022 and ResNets are still popular. He's also started a new series on YouTube which gives a fantastic overview of building a DL library from scratch. His talent is convincing you that actually the code that runs these mega networks is conceptually quite simple. The videos also have accompanying exercises in Colab, which is great. Like Fast AI he throws in a lot of neat Python stuff which you might not be familiar with (and it's interesting to see how he approaches problems and tooling). Of course there's Andrew Ng's course on ML, which is the OG 🙂
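In the spirit of that from-scratch series, the core loop behind those networks really is small. A toy example: one sigmoid neuron fit to an AND gate with plain-Python gradient descent (data and hyperparameters are arbitrary):

```python
# One sigmoid neuron trained on a toy AND-gate dataset by gradient descent.
import math

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0
lr = 1.0

for _ in range(2000):
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # forward pass
        g = p - y                 # dLoss/dlogit for cross-entropy loss
        w1 -= lr * g * x1         # backward pass + parameter update
        w2 -= lr * g * x2
        b -= lr * g

preds = [round(1 / (1 + math.exp(-(w1 * a + w2 * c + b)))) for (a, c), _ in data]
```

Everything a large framework adds (tensors, autograd, layers) is machinery around this same forward/backward/update cycle.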

🙏 Peggy Bevan, Dhruv Sheth
Silvia Zuffi (silvia@mi.imati.cnr.it)
2022-12-19 04:35:58

*Thread Reply:* I would recommend Bishop’s book https://www.microsoft.com/en-us/research/people/cmbishop/prml-book/

Microsoft Research
👍 Peggy Bevan, Dhruv Sheth
➕ Atul Ingle
Toryn Schafer (tschafer@tamu.edu)
2022-12-19 10:40:13

*Thread Reply:* Not specifically textbooks or ML, but the Ecological Forecasting Initiative have put together a great list of educational resources and I'm sure there is intersection with ML: https://ecoforecast.org/resources/educational-resources/

👍 Peggy Bevan, Carly Batist, Dhruv Sheth
Peggy Bevan (ucbtbev@ucl.ac.uk)
2022-12-19 11:10:07

*Thread Reply:* Thanks so much everyone! This is incredibly helpful 🤗🤗

Sankaran (shun-ka-run) (sankaranv@cs.umass.edu)
2022-12-21 07:56:40

*Thread Reply:* I would suggest using lecture notes and slides from a course (there are so many; Stanford CS 229 seems to be popular, though) rather than a textbook to help build a foundation in ML, and then use another resource specifically for deep learning. I learned from Kevin Murphy's book (Machine Learning: A Probabilistic Perspective), which gave me a great foundation but is very huge and covers a lot of methods in depth - not sure it's the best use of time for someone trying to move quickly into applied work.

I really liked this resource for deep learning. I wasn't a fan of the textbook by Goodfellow, Bengio, and Courville that is pretty much the standard in courses, though I don't know of another that is widely adopted. IMO the online resources and notebooks blow the textbooks out of the water, but I've never seen one where the ML sections that precede the deep learning sections are very good. https://uvadlc-notebooks.readthedocs.io/en/latest/ https://d2l.ai

Cody Kupferschmidt (kupfersc@uoguelph.ca)
2023-01-04 10:00:10

*Thread Reply:* A Course in Machine Learning by Hal Daumé is free to download and quite good. http://ciml.info/

Lukas Picek (lukaspicek@gmail.com)
2022-12-19 14:08:08

Great news! The FGVC10 workshop was accepted to CVPR 2023 and will take place in Vancouver. Interested in fine-grained learning and its applications in biodiversity and conservation? Consider submitting a paper to FGVC10 or participating in one of our competitions.

More information on the website: https://sites.google.com/view/fgvc10

CVPR #CVPR2023 #FGVC

😍 Sara Beery, Fagner Cunha, Marek Hruz, Oisin Mac Aodha, Lukas Picek, Andrew Schulz, Elijah Cole (Deactivated), Nico Lang, Jason Holmberg (Wild Me), Yuanqi Du, Robin Zbinden, Georgia Atkinson, Timm Haucke, Justin Kay
🎉 Lukas Picek, Jon Van Oast, Omiros Pantazis, Aleksis Pirinen, Josh Seltzer, Jason Holmberg (Wild Me), Dan Morris, Diego Marcos, Cameron Trotter, Riccardo de Lutio, Yuanqi Du, Timm Haucke
:thumbsup_all: Frederic Fol Leymarie, Jose Ruiz-Munoz, Timm Haucke
👍 Kostas Papafitsoros
Nico Lang (nila@di.ku.dk)
2023-03-06 17:11:54

*Thread Reply:* FGVC10's submission website is now open for your 4-page extended abstracts on fine-grained recognition and related topics: https://sites.google.com/view/fgvc10/submission Deadline: March 20

🙏 Oisin Mac Aodha, gvanhorn, Andy Viet Huynh, Lukas Picek, Sara Beery
🎉 Jon Van Oast, Ronny Hänsch, gvanhorn, Andy Viet Huynh, Fagner Cunha, Lukas Picek, Riccardo de Lutio, Sara Beery, Dan Morris
Josh Seltzer (jyseltz@gmail.com)
2022-12-20 12:37:57

I just came across the GPAI Biodiversity & Artificial Intelligence report, published last month - I see it has a lot of citations and contributions from some community members here, and for myself at least it's a very useful resource which I'll be using to guide future work!

😎 Jon Van Oast, Sara Beery, Yseult Hb, Hannah Murray, Wenxin Yang, Carly Batist, Emilio Luz-Ricca, Marconi Campos, Kateryna Nekhomiazh
👍 Otto Brookes, Armi Tiihonen, Monty Ammar
Devis Tuia (devis.tuia@epfl.ch)
2022-12-21 03:59:24

Seems that there will be quite some “Earth” workshops at CVPR this year! To add to the good news of FGVC10 above (and I let the cv4animals folks announce their own later 😉 ), I am also happy to announce the seventh edition of EarthVision 🌍📡! More info here: https://www.grss-ieee.org/events/earthvision-2023/ (we are still populating the website, but at least the deadlines are up…)

GRSS-IEEE
🎉 Lukas Picek, Oisin Mac Aodha, Nico Lang, Omiros Pantazis, Subhransu Maji, Timm Haucke, Aleksis Pirinen, Robin Zbinden, Olivier Dietrich, Adam Noach, Sara Beery, Iván Higuera-Mendieta, Emilio Luz-Ricca, Aakash Gupta, Kasirat
🍁 Burak Ekim, Frederic Fol Leymarie
🌎 Oisin Mac Aodha, Nico Lang, Valentin Gabeff, Omiros Pantazis, Aleksis Pirinen, Robin Zbinden, Majid Mirmehdi
😍 Sara Beery
Silvia Zuffi (silvia@mi.imati.cnr.it)
2022-12-21 12:02:47

Yes, CV4animals will be at CVPR again!!! Stay tuned for a proper announcement with the new website! In the meantime I can anticipate the list of speakers, as I am super happy about it: Kostas Daniilidis, Tanya Berger-Wolf, Katija Kakani, Arsha Nagrani, Albert Ali Salah!

👍 Subhransu Maji, Oisin Mac Aodha, Elijah Cole (Deactivated), Sara Beery, Felipe Parodi, Katelyn Morrison, Jason Parham, Omiros Pantazis, Aleksis Pirinen, Anton Alvarez, Jose Ruiz-Munoz, Timm Haucke, Cameron Trotter, Devis Tuia, Robin Zbinden, Majid Mirmehdi, Otto Brookes, Yseult Hb, Justin Kay, Paul Janson, Levi Cai, Genevieve Moat
🐕 Sara Beery, Robin Zbinden, Emilio Luz-Ricca
🎉 Sofía Miñano
Devis Tuia (devis.tuia@epfl.ch)
2023-01-10 02:39:29

*Thread Reply:* details details 🙂, we want details!

Silvia Zuffi (silvia@mi.imati.cnr.it)
2023-01-11 05:09:54

*Thread Reply:* Yes, soon!

Silvia Zuffi (silvia@mi.imati.cnr.it)
2023-01-18 07:21:35

*Thread Reply:* Here it is! https://www.cv4animals.com/ Please note that we accept work-in-progress papers, as we want our workshop to be an opportunity for people to interact and have feedback on their work. Hope to see many submissions from this community!

🙌 Stephanie O'Donnell, Vincent Christlein, Cameron Trotter, Oisin Mac Aodha, Dan Morris, Emilio Luz-Ricca, Valentin Gabeff, Sara Beery, Mahir Patel, Sofía Miñano
:squirrel: Andrew Schulz
Carly Batist (cbatist@gradcenter.cuny.edu)
2022-12-27 14:00:34

https://connectedconservation.foundation/news/satellites-for-biodiversity-award/

👍 Wenxin Yang, Robin Zbinden, Jose Ruiz-Munoz, Sara Beery, Declan
👍:skin_tone_5: Ando Shah
Andrew Schulz (akschulz@gatech.edu)
2022-12-28 14:08:14

Not sure if people in this group are interested, but this is the syllabus for the Conservation Tech course at Georgia Tech, aimed at undergraduate engineers, computer scientists, and biologists. I'm working over the winter break to get more of the content online; once it's up, it will all be linked from this syllabus! https://zenodo.org/record/7470257#.Y6yTW3aZND9

👍 Jose Ruiz-Munoz, Dan Morris, Ted Schmitt, Tiziana Gelmi Candusso, Wenxin Yang, Alessandra Sellini, Cameron Trotter, Sara Beery, Dhruv Sheth, Sankaran (shun-ka-run), George Colaco, Kalindi Fonda, Timm Haucke
❤️ Carly Batist, Anton Alvarez, Yseult Hb, Dhruv Sheth, Rebecca Wilks
Lindsey Dukles (lindseydukles@gmail.com)
2022-12-30 11:22:08

Hi All!👋

I'm Lindsey and am new here! @Michael Bunsen introduced me to this group and it has been inspiring to see what everyone is working on!

I'm an NYC-based Software Engineer with a background in conservation biology. I recently completed a coding bootcamp and am looking to land my first software engineering role with a company whose mission I believe in. I have a background in chemical lab management, outdoor education, and the service industry. Outside of work you can find me birding 🐦, botanizing 🌼, climbing 🧗‍♂️, or reading 📕.

Excited this group exists and to connect with you all. Feel free to reach out to me here or linkedin!

👋 Josh Seltzer, Sara Beery, Eddie Zhang, Carly Batist, Yves Bas, Kateryna Morhun, Dan Morris, Antonio Ferraz, Jose Ruiz-Munoz, Jorrit van Gils
Alexander Kobald (avkobald@gmail.com)
2023-01-02 10:20:12

Hi everyone! I'm Alex, I co-direct the Design Across Scales Lab, an architecture research lab at Cornell University. @Sara Beery pointed me to this group when we met to share notes on some of our work on detecting and modeling urban forests. Excited to see what everyone is working on and happy to be here!

😎 Jason Holmberg (Wild Me), Sara Beery, Jorrit van Gils
Ben Weinstein (benweinstein2010@gmail.com)
2023-01-02 18:29:48

*Thread Reply:* cool demo! Are we looking at species predictions or ground truth? If predictions, from LiDAR only? Point density? Any publications/white paper to go with these? We work on similar themes using HSI. https://www.biorxiv.org/content/10.1101/2022.12.07.519493v1. Working on semi-supervision now.

Joe Ferdinando (jgf94@cornell.edu)
2023-01-03 12:28:02

*Thread Reply:* These are "ground truth" from the NYC 2015 tree census

👍 Ben Weinstein
Devis Tuia (devis.tuia@epfl.ch)
2023-01-03 07:46:44

Little reminder, deadline January 9th: 💥 MEGA PROJECT and JOB ALERT: Hello everyone! We (U. Schultz, @Devis Tuia, @Blair Costelloe, @Tilo Burghardt, M. Wikelski, B. Risse and many more) are happy to share that we will start the project WILDDRONE (wilddrone.eu) early next year! It is a Marie Curie Network, meaning that we will build a network of 13 PhD students :femalestudent:🧑‍🎓 across Europe around themes of drones for conservation in Africa :zebraface:, with PhD topics ranging from computer vision to robotics and ecology! This also means that we need your help 🆘, dear members, to recruit… can you help us out by sharing this link among your peers/students/friends? https://wilddrone.eu/recruitment/

💥 Robin Zbinden, Josh Seltzer, Sara Beery, Kateryna Morhun, Yseult Hb, Eddie Zhang, Emilio Luz-Ricca, Jorrit van Gils
😎 Jon Van Oast
📝 Remi Gosselin
benjamin de charmoy (benjamin.decharmoy@gmail.com)
2023-01-03 13:19:08

*Thread Reply:* This looks super cool. It wasn’t immediately obvious if it’s open to folks from non European countries. I’m from South Africa and would love to apply. I’ll reread the details to check. Just reading the well structured themes and projects is interesting/exciting. Thanks for sharing

Blair Costelloe (blaircostelloe@gmail.com)
2023-01-03 13:22:08

*Thread Reply:* It is open to all nationalities!

Blair Costelloe (blaircostelloe@gmail.com)
2023-01-03 13:22:54

*Thread Reply:* The only limit is the MSCN mobility rule, dependent on the country of the hosting institution – ‘No residence or main activity (work, studies etc.) in the country of the recruiting beneficiary for more than 12 months in the 36 months before their recruitment date’.

benjamin de charmoy (benjamin.decharmoy@gmail.com)
2023-01-03 13:24:15

*Thread Reply:* Awesome 😎 ah! I see that now. Thank you

Blair Costelloe (blaircostelloe@gmail.com)
2023-01-03 13:25:27

*Thread Reply:* No problem! Let us know if you have any other questions as you write the application

✊ benjamin de charmoy
Devis Tuia (devis.tuia@epfl.ch)
2023-01-03 14:39:35

*Thread Reply:* definitely, there is no nationality restriction, apart from not having lived recently in the country that will host you!

🥸 benjamin de charmoy
🥳 benjamin de charmoy
Remi Gosselin (remipgosselin@gmail.com)
2023-01-03 14:47:38

*Thread Reply:* Finishing my application later today! :femalestudent::skintone2::femaletechnologist::skintone2:

🙏 Devis Tuia
🙌 Blair Costelloe, benjamin de charmoy
Sagar Nagaraj Simha (sagarnagarajsimha@gmail.com)
2023-01-03 23:55:24

*Thread Reply:* Thank you for this wonderful consortium. I am excited to apply. I had a few general queries while uploading the application materials on the portal

  1. Project description - Does this refer to the 2 page motivation letter?
  2. List of publications - I already have a list described in my CV. Should I also list them in a separate document and upload it?
  3. Publication - I assumed I should upload a chapter from my thesis here, as mentioned in the requirements. Can I also just upload my entire thesis since the file size is within 10 Mb?
  4. References - Should I list the details of my referees in a document, since they would be contacted in the second stage?
Devis Tuia (devis.tuia@epfl.ch)
2023-01-04 02:44:13

*Thread Reply:* Hello,

  1. yes, it is the motivation for you to join the consortium and for the (up to) three projects you apply for among the 13
  2. I guess it does not harm to upload a 2nd one
  3. Perfect. Otherwise try to compress it a bit or send a link to the full version if it is too big for the SDU server
  4. Yes, a one pager with the name and addresses of up to three references
Sagar Nagaraj Simha (sagarnagarajsimha@gmail.com)
2023-01-04 02:53:27

*Thread Reply:* Noted. Thank you for all the clarifications, Professor.

👍 Devis Tuia
Emilio Luz-Ricca (eluzricca@email.wm.edu)
2023-01-08 22:15:02

*Thread Reply:* I am having some trouble with the SDU portal for submitting my application… I have tried submitting multiple times today, but have not received a confirmation receipt yet. The portal says that this means that the application has not been registered properly. Is this an issue that other applicants are experiencing? Is there any other way to submit my application by tomorrow’s deadline if the portal continues having issues? Thank you!

Emilio Luz-Ricca (eluzricca@email.wm.edu)
2023-01-08 23:16:57

*Thread Reply:* Looks like the application went through in the end. Apologies if multiple submissions appear!

Devis Tuia (devis.tuia@epfl.ch)
2023-01-09 03:57:31

*Thread Reply:* no problem!

Devis Tuia (devis.tuia@epfl.ch)
2023-01-09 03:57:43

*Thread Reply:* I would have contacted the SDU admin if the problem persisted

benjamin de charmoy (benjamin.decharmoy@gmail.com)
2023-02-02 02:31:39

*Thread Reply:* @Emilio Luz-Ricca have you heard back regarding your application? I submitted as well and haven’t received anything further. Just curious. Thanks 😊

Devis Tuia (devis.tuia@epfl.ch)
2023-02-02 02:51:02

*Thread Reply:* hello Benjamin. We are going through all the applications as we speak. It takes time, especially since we are coordinating across 13 PIs. You should hear back from us in a week or two. Thanks for your patience.

benjamin de charmoy (benjamin.decharmoy@gmail.com)
2023-02-02 04:12:30

*Thread Reply:* Awesome, I fully understand. And thank you 😊 I was just curious not really expecting anything so soon. Thanks for the swift response.

Sagar Nagaraj Simha (sagarnagarajsimha@gmail.com)
2023-02-07 15:43:59

*Thread Reply:* https://wilddrone.eu/kick-off-meeting/

Sagar Nagaraj Simha (sagarnagarajsimha@gmail.com)
2023-02-07 15:44:18

*Thread Reply:* @benjamin de charmoy I have been curiously waiting too. This link may help follow the happenings at Wilddrone

Sagar Nagaraj Simha (sagarnagarajsimha@gmail.com)
2023-02-22 09:11:30

*Thread Reply:* @benjamin de charmoy @Emilio Luz-Ricca Did you receive any responses so far, apart from the assignment of the review committee, maybe?

benjamin de charmoy (benjamin.decharmoy@gmail.com)
2023-02-22 14:44:53

*Thread Reply:* I did. I was unsuccessful 🦑 Got a nice kind message 😊 keen to see the results some day as it looks like a strong initiative 🚀

Sagar Nagaraj Simha (sagarnagarajsimha@gmail.com)
2023-02-22 15:22:13

*Thread Reply:* Oh! That’s unfortunate. Thanks for the reply. I am yet to hear the decision. I wish you good luck with other opportunities!

Thijs (thijs@q42.nl)
2023-01-04 04:00:43

I'm pretty happy to say that a paper I co-authored is online now! http://doi.org/10.1111/2041-210X.14036

It's about a pilot we did in Gabon with real-time AI cameras to validate a system to mitigate human-elephant conflicts.

It was the first time I co-authored a scientific paper, was really fun to do and learn about the process.

🎉 Cameron Trotter, Jose Ruiz-Munoz, Declan, Lucia Gordon, Dan Morris, Sara Beery, Carly Batist, Rita Pucci, Emilio Luz-Ricca, Mark Fisher, Timm Haucke, Pen-Yuan Hsing, Alexander Robillard, Rebecca Wilks
Thijs (thijs@q42.nl)
2023-01-04 04:32:58

See this post for more info and a cool video about our project: https://www.linkedin.com/posts/tsuijtenai-enabled-camera-traps-for-protecting-elephants-activity-7016335238213607424-B0e6?utmsource=share&utmmedium=memberdesktop

🎉 Jon Van Oast, Carly Batist, Ed Miller, Rita Pucci, Sam Kelly, Dhruv Sheth, Abhay
🎶 Jon Van Oast, Dhruv Sheth
Ed Miller (ed@hypraptive.com)
2023-01-04 18:08:55

*Thread Reply:* Very cool. We are hoping to do something similar with the BearID Project at some point. I sent you a connection request on LinkedIn!

Jorrit van Gils (vangilsjorrit@gmail.com)
2023-01-06 05:06:30

*Thread Reply:* Great work Thijs! 👏

❤️ Thijs
Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-01-07 12:44:24

*Thread Reply:* Read your paper. Awesome work with the use of a mini computer. I'm curious: what will the cost of a single compute unit be?

Pen-Yuan Hsing (penyuanhsing@posteo.is)
2023-01-08 18:42:24

*Thread Reply:* Super cool work! Sorry haven't read the full paper yet, but what's the power consumption like for the AI and satellite connection?

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-01-10 23:35:18

*Thread Reply:* @Jorrit van Gils @Thijs super interested in these figures - cost and power consumption. I have an understanding of AI models and deploying them, but I don't know a lot about embedded compute. It would be great if you could shed some light on it or point to some relevant resources (if possible) 🙂

Thijs (thijs@q42.nl)
2023-01-11 02:55:18

*Thread Reply:* @Aakash Gupta there are many technical details in the supplementary materials of the paper.

👍 Aakash Gupta
Thijs (thijs@q42.nl)
2023-01-04 04:34:05

In this video you can see we actually witnessed a Human-Elephant conflict when we were rolling out our technology in Gabon. That is not something I experience every day 😇

Sara Beery (sbeery@caltech.edu)
2023-01-09 02:06:43

Very exciting new faculty position(s) in conservation science at Berkeley 🙂

"Please help us spread the word about our exciting new search for one or possibly more Assistant Professors in Conservation Science in the Department of Environmental Science, Policy and Management at UC Berkeley.

We view conservation science as interdisciplinary and applied by necessity; combining aspects of biology, ecology, geography, data science and environmental policy, management, and justice to dissect threats and generate solutions related to biodiversity loss, climate change, land conversion, unsustainable consumption, water scarcity, wildfire, disease, invasive species, and other challenges. We seek applicants whose work engages conservation using (but not limited to) approaches from population or community ecology, wildlife, fisheries or habitat conservation, forest or rangeland science, computer/data science or ecoinformatics, protected area or working lands management, and conservation governance, planning, policy or effectiveness.

We appreciate any help you can provide in getting this advertisement out to your groups and professional networks.

The full position description is available here: https://aprecruit.berkeley.edu/JPF03613

The deadline for applications is February 2, 2023."

😊 Aleksis Pirinen, Anton Alvarez, Fagner Cunha, Yseult Hb
👍 Oisin Mac Aodha, gvanhorn, Shivam Shrotriya
😎 Jon Van Oast
Benno Simmons (benno.simmons@gmail.com)
2023-01-10 09:21:37

Hi all. Just a quick question and potential collaboration opportunity on a grant application. Is anyone aware of, or does anyone own, any labelled camera trap datasets from UK woodlands? Any taxa are fine.

Cathy Atkinson (cathy.atkinson@highlandsrewilding.co.uk)
2023-01-12 04:28:37

*Thread Reply:* I might be able to help, depending on what you need

Mahir Patel (mahirp@bu.edu)
2023-01-10 16:10:23

Hi everyone, I just wanted to share our recent work on a species-independent 3D pose optimization toolkit that works not only in lab settings (our Rodent3D dataset, DeepFly3D, Rat7M) but also in open-field settings (AcinoSet). It was recently published in IJCV, and we also presented it at the last CV4Animals workshop. Since then, we have been working on pose optimization software (work in progress) that complements widely used toolkits such as DeepLabCut and DANNCE. It currently generates robust 3D tracking and postural analysis only for calibrated camera systems; I hope it will be helpful to people from diverse fields once it matures. https://www.cs.bu.edu/faculty/betke/OptiPose

🙌 Josh Seltzer, Timm Haucke, Viktor Domazetoski, Aleksis Pirinen, Vincent Christlein, Paul Janson, benjamin de charmoy, Aamir Ahmad, Sara Beery, Yseult Hb, Genevieve Moat
👍 Ed Miller
Vincent Christlein (vincent.christlein@fau.de)
2023-01-11 03:39:59

Hi everyone!

👋 Nora Gourmelon, Robin Zbinden, Declan, Jose Ruiz-Munoz, Suzanne Stathatos, Mahir Patel, Aleksis Pirinen, Rita Pucci, Sara Beery, Lindsey Dukles, Yseult Hb, Ronan Wallace
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-01-11 14:34:05

Hi everyone! We at the Flight Robotics and Perception Group just released SMALify: https://github.com/robot-perception-group/smalify -- a PyTorch-based method to estimate the pose and shape of animals with the SMAL animal model from multi-view images of the animal as input. For now it is tested with videos of Przewalski's horses and Grévy's zebras recorded with multiple drones flying simultaneously around them. It is easy to use and well documented, with examples and demos. It is developed for multi-view drone images but obviously not limited to them. One can use any number of camera sources. Will be happy to help anyone who wants to use it!

🙌 Sofía Miñano, Sara Beery, Paul Janson, Anton Alvarez, Moira Shooter
Silvia Zuffi (silvia@mi.imati.cnr.it)
2023-01-12 02:02:14

*Thread Reply:* Nice, but maybe not the best choice of name. There is already an existing SMALify method from Benjamin Biggs that is quite popular; maybe you can rename it SMALifymv for multi-view? https://github.com/benjiebob/SMALify

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-01-12 02:21:13

*Thread Reply:* Thanks! yes, good idea! too many similar names around 🙂

Tjomme Dooper (tjomme@fruitpunch.ai)
2023-01-16 05:35:03

Hi friends,

FruitPunch AI 🍉 has teamed up with researchers from Cornell & Stanford, and tech partners like AWS, Edge Impulse and RFCx, to work on acoustic monitoring of elephant rumbles using AI. Up to 50 AI enthusiasts and experts from all over the world will work together for 10 weeks to improve an audio processing pipeline, optimize ML models and implement them on-edge, starting a month from now.

With the AI for Forest Elephants Challenge, we'll develop a prototype that can be deployed in the field for follow-up research into elephant populations and to prevent poaching and human-wildlife conflict.

We have some spots left for anyone with experience in Python and about a day per week to spare between February 17th and May 2nd. Registrations are open until February 17th. I'd love to see some of you there 🙂

🙌 Stephanie O'Donnell, Ali Johnston, Lindsey Dukles, Ronan Wallace, Sofía Miñano, Ed Miller, Yseult Hb, Dan Morris, Abhay, Aakash Gupta, Sara Beery, Dhruv Sheth, Alexander Kobald, Nicolas Arrieta Larraza
😎 Jon Van Oast, Rebecca Wilks
🎉 Jan Kees
Nick Giampietro (giampiet@pdx.edu)
2023-01-16 23:27:05

Is there anyone actively working on using AI/ML and remote-sensing techniques to aid/simplify the field work required for government reports like the US NEPA Environmental Assessment? There's surely a lot of overlap with many more general projects folks are doing, but I'm specifically looking for people using them to aid with stringent government bureaucratic processes 🙂

Declan (declan.pizzino@consbio.org)
2023-01-17 11:23:18

*Thread Reply:* This is in line with the sort of work that we do at CBI - providing science, analyses, and tools to support decision-makers. Unfortunately the fun RS + AI/ML projects happen (or are funded) less often, but we've got a couple that are active for the upcoming year or two.
• Our current big project spinning up right now is for the USDA Conservation Reserve Program. The RS + AI/ML is just one component of many for this project.
• We'll be doing some updates for habitat models supporting the Stephens' kangaroo rat Rangewide Conservation Plan

Pen-Yuan Hsing (penyuanhsing@posteo.is)
2023-01-17 12:08:22

Hi everyone,

I am conducting a ~10 minute survey ⏱️📋 which I hope you can share with your networks 🤝:skintone3::

🔗 https://ec.europa.eu/eusurvey/runner/research-baseline-survey

🔗 shortened link: https://t.ly/GWLb

This survey asks questions about the research culture in your field/discipline, including possible barriers to doing open science such as publishing ideas, methods, data, etc. I believe it is relevant here because better open research practices will enable the innovation that's needed for conservation!

⏳ The deadline for submission is end-of-day on Sunday, February 5 2023 (AoE).

:firstplacemedal: Everyone who completes and submits this survey can choose to enter a drawing for a prize worth GBP £25!!!

Anonymised results will be summarised in a report I am writing for UK Research and Innovation (UKRI), the primary public research funder in the United Kingdom. It will inform a better understanding of, and possible reforms to, research institutions and the development of new research publishing platform(s). I am doing this wearing my hat 👒 as an open science researcher at the University of Bristol in the Octopus project 🐙.

Most importantly, please share this survey among your networks! I hope to get responses from researchers representing diverse disciplines in the natural and social sciences, especially those who are NOT strongly aware of or advocates for open science. Or, if you can suggest other places to share this survey do let me know.

Please respond if you have any questions/concerns, and thank you in advance for your time! 🙏:skintone3:🙇‍♀️:skintone3:

✅ Remi Gosselin, Sara Beery
Catherine Villeneuve (catherine.villeneuve.9@ulaval.ca)
2023-01-17 16:41:03

Are you interested in Predator-Prey Ecology | Machine learning | Movement analysis | Modelling? BIOS2 (A Canadian computational ecology training program) and Sentinel North (A large transdisciplinary Arctic research and training strategy funded by the Canada First Research Excellence Fund) are organizing an advanced field school in computational ecology 🤖🐻‍❄️ from May 19 to 26 2023 in the beautiful setting of the Couvent de Val-Morin, Canada. Join us in a cozy cottage, by a beautiful lake, in the Boreal Forest 🌲

The field school is aimed at international grad students and post-docs in ecology, familiar with R or Python (but no need to be an expert!). Once at the cottage, we will create an original data set by playing a predator-prey interaction game called TrophIE (Trophic Interaction Experiments), where bio-logged players impersonate predators and prey in a real-life context. We will then analyze the data through 5 days of intensive workshops led by experienced mentors.

The goal is to delve into predator-prey theory and gain skills in machine learning, behavioural classification, movement analysis and advanced modelling of interaction networks.

For more information and to register: https://sentinellenord.ulaval.ca/en/ecology2023 Applications are open until 1 February 2023 (with the potential for a short extension of the deadline), and attendance is limited to approximately 35 graduate students and post-docs.

Feel free to reach out to me if you are interested and have any questions 🙂

😮 Jon Van Oast, Arjun Subramonian (they/them), Benjamin Hoffman, Toryn Schafer, Yseult Hb, Sara Beery
🦊 Suzanne Stathatos, Arjun Subramonian (they/them), Benjamin Hoffman, Sara Beery
🐰 Suzanne Stathatos, Arjun Subramonian (they/them), Benjamin Hoffman
👍 Savannah Bissegger O'Connor
Savannah Bissegger O'Connor (savannah.bisseggeroconnor@gmail.com)
2023-01-18 15:42:53

Hi everyone! 👋

@Tim Elrick and I are happy to join the group! We are from McGill University and are estimating the scale of white-tailed deer overpopulation at the Gault Nature Reserve in Mont-Saint-Hilaire, Quebec using a drone equipped with a thermal camera. After detecting deer by eye in our ~550 thermal images, we are starting to look into the use of machine learning to detect deer signatures in our thermal images. We look forward to exchanging ideas here in Slack and would be happy if you could point us to studies or literature on this topic. ☺️

👋 Declan, Catherine Villeneuve, Aamir Ahmad, Dan Morris, Suzanne Stathatos, Jon Van Oast, Timm Haucke, Valentin Gabeff, Ali Johnston, Sara Beery, Björn Lütjens, Aakash Gupta
Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-01-18 18:04:58

*Thread Reply:* I built a drone-based system for thermal rhino detection a few years ago - basically the same thing. You will probably get good results provided you have decent contrast between the ground and the animal, especially if you can do it by eye. Here's a paper from my old group that discusses thermal animal detection at a high level:

https://www.astro.ljmu.ac.uk/~aricburk/Burke2018.pdf

Otherwise you can fine-tune normal "colour" object detection models and they mostly just work. I would recommend trying to classify radiometric images rather than the min-max scaled images, there is very little published research on this, but it should give you an improvement. Might require some custom dataloading because you have to load the 16-bit images and then normalise them.

I also have labelled data from a few flights of various ungulate species in the North of the UK which might be useful, not under forest cover though.

I think the most important thing to consider is your flight altitude vs size of the animal in pixels in the image. Try and make sure that your deer are at least 30 px. That can be tricky if you're flying over forest and especially with a thermal cam because you only get 640x512 at best. Otherwise it really depends if there are only deer in the images (e.g. can you just look for bright stuff and then confirm later).

Feel free to DM me, I'm happy to have a call if you want to discuss further!
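To make the custom-dataloading step above concrete, here is a minimal numpy sketch of min-max scaling a 16-bit radiometric frame into the [0, 1] range a detector expects. The function name, frame size, and pixel counts are illustrative, not from any particular camera SDK:

```python
import numpy as np

def normalize_16bit(frame: np.ndarray) -> np.ndarray:
    """Min-max scale a 16-bit radiometric frame to [0, 1] float32."""
    frame = frame.astype(np.float32)
    lo, hi = frame.min(), frame.max()
    if hi == lo:
        # Flat frame (no thermal contrast): avoid division by zero.
        return np.zeros_like(frame)
    return (frame - lo) / (hi - lo)

# A fake 640x512 frame: cool background plus one warmer, ~30 px animal-sized blob.
frame = np.full((512, 640), 29500, dtype=np.uint16)
frame[200:230, 300:330] = 31000
norm = normalize_16bit(frame)
```

In a real pipeline this would replace the default 8-bit image loading in the detector's dataloader, so the model sees the raw radiometric values rather than a pre-scaled export.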

👍 Jon Van Oast, Savannah Bissegger O'Connor
Savannah Bissegger O'Connor (savannah.bisseggeroconnor@gmail.com)
2023-01-19 20:03:08

*Thread Reply:* Hi Josh - thanks for your reply! I am actually already aware of the paper you sent and have referred to it in the past, especially the observing strategy optimization web tool!

For the "colour" object-detection models, do you have any software or algorithms in mind that we should look to? I am rather new to the machine learning world and the only software I personally have experience with is ENVI and their object-oriented classification / feature extraction tool. The tool was able to pick out the deer signatures in one of our higher contrast images, though I am not sure if this is the best tool for the job.

To touch on some of the things you mentioned: due to an unknown error, all our images appear to have turned out non-radiometric despite having a radiometric camera, and yes, we are lucky that we only had deer in our survey area (or rather, they are the only larger-sized mammal on the reserve).

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-01-20 05:15:57

*Thread Reply:* Back in the day we used YOLO (for speed), but I think any object detector would be fine (e.g. Faster-RCNN, SSD, YOLOv8). There are quite a few "no code" solutions if you just want to upload a dataset and get a model out. It's more dependent on your dataset than the model architecture. I also think 90+% of people use non-radiometric, so that shouldn't be a problem even if it isn't optimal 🙂

What do you want to do with the detections? That may have some impact on which tool you choose, e.g. if you want to integrate with ENVI this is probably as good as any: https://www.l3harrisgeospatial.com/docs/ENVIDeepLearningTutorialObjectDetection.html

👍 Savannah Bissegger O'Connor
Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-01-21 23:42:31

*Thread Reply:* This is very interesting! Thanks for sharing. For this particular use case you should think about how the model will generalize to out-of-sample images. Do the data points allow you to distinguish deer vs. non-deer objects in the image?

👍 Savannah Bissegger O'Connor
Savannah Bissegger O'Connor (savannah.bisseggeroconnor@gmail.com)
2023-01-23 20:33:00

*Thread Reply:* @Josh Veitch-Michaelis Thanks for the suggestions! I will look into YOLO and the others you mentioned. We essentially have ~550 non-radiometric RGB images (like the one I attached) to iterate through, each having different levels of thermal contrast due to environmental and other factors. So, I assume we will have to take into consideration that the deer signatures will have different RGB values.

Savannah Bissegger O'Connor (savannah.bisseggeroconnor@gmail.com)
2023-01-23 20:38:10

*Thread Reply:* @Aakash Gupta Thanks for your comment! Please forgive my lack of expertise in this domain, but could you specify what you mean by the model generalizing out-of-sample images? (What is out-of-sample?).

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-01-24 19:31:10

*Thread Reply:* @Savannah Bissegger O'Connor out-of-sample refers to how well your training data represents new, unseen, data you're likely to run the model on. A typical example would be you train the model in one forest, but deploy it in another and for some reason all the deer are a few degrees warmer and the model struggles to detect them.

Essentially as long as your training data looks sufficiently similar to where you'll deploy the model, you should be OK. But if you get poor performance when you deploy it, it's something to look out for.
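A toy numpy sketch of the failure mode described above, with entirely made-up numbers: deer pixels at the deployment site run a couple of degrees warmer than at the training site, and a simple comparison of site means flags the shift before you trust the detector:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-pixel deer temperatures (arbitrary units) at each site.
train_temps = rng.normal(loc=31.0, scale=0.5, size=1000)   # training forest
deploy_temps = rng.normal(loc=33.0, scale=0.5, size=1000)  # deployment forest, warmer deer

# Crude distribution-shift check: compare the two sites' means.
shift = abs(float(train_temps.mean()) - float(deploy_temps.mean()))
```

A noticeable gap in `shift` would be a hint to label a few frames from the new site and check detector performance before running it at scale.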

👍 Savannah Bissegger O'Connor, Aakash Gupta
Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-02-03 21:44:45

*Thread Reply:* @Savannah Bissegger O'Connor, I believe Josh has explained it very well. To address it you can use a technique known as data augmentation: you use image manipulation techniques to create variations of each image. This increases the total size of your training dataset and helps the model generalize. But you should be wary of data leakage while implementing it.
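A minimal numpy sketch of the geometric-augmentation idea (the function name and choice of transforms are illustrative; libraries such as torchvision or imgaug provide much richer versions). Flips and right-angle rotations are label-preserving for top-down aerial imagery, provided any bounding boxes are transformed the same way:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return simple geometric variants of one training image."""
    return [
        image,             # original
        np.fliplr(image),  # horizontal flip
        np.flipud(image),  # vertical flip
        np.rot90(image),   # 90-degree rotation
    ]

# Four training samples from one labelled frame.
variants = augment(np.arange(12, dtype=np.uint8).reshape(3, 4))
```

The data-leakage caveat applies here: all augmented copies of a frame must stay on the same side of the train/validation split, or validation scores will be inflated.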

👍 Savannah Bissegger O'Connor
Jake Burton (jake.burton@fauna-flora.org)
2023-01-19 10:35:34

Hi everyone!

It’s great to be part of this group! I’m Jake and I started working as Project Officer at WILDLABS late last year.

I just wanted to share that applications are now open for our AI for Conservation Office Hours, where we are teaming up with @Dan Morris (from Google's AI for Nature and Society program) to offer conservationists facing AI or ML challenges in their work the chance to get advice from AI specialists in 1:1 virtual sessions.

If you think you could benefit from one of these sessions, then apply now! We would also be hugely grateful if you could share this in any of your networks that might be interested.

The deadline to apply is Friday 27 January 2023. Visit WILDLABS for more info and how to apply.

Thanks very much! 🐯

🙌 Sasha Luccioni, Stephanie O'Donnell, Alessandra Sellini, Toryn Schafer, Dan Morris, Sara Beery, Ed Miller, Timm Haucke
❤️ Carly Batist, Pen-Yuan Hsing, Jon Van Oast, Suzanne Stathatos, Talia Speaker, Timm Haucke
aruna (arunas@mit.edu)
2023-01-19 12:03:19

@here if you have 5m and you care about climate change, can you please help classify these pictures? https://forms.gle/H54vppXauJLNNMut5. Thank you! 🍀

😍 Björn Lütjens, Kateryna Nekhomiazh
🙌 Björn Lütjens
🌍 Björn Lütjens
Sasha Luccioni (sasha.luccioni@huggingface.co)
2023-01-19 12:03:46

*Thread Reply:* what's the purpose of this?

Björn Lütjens (bjoern.luetjens@gmail.com)
2023-01-19 12:06:49

*Thread Reply:* done - very fun and took less than 5 min @Sasha Luccioni. aruna is a kick-ass grad student working on climate misinformation at MIT; I'm assuming this is for some of her research.

👍 Sasha Luccioni
❤️ aruna
aruna (arunas@mit.edu)
2023-01-19 12:07:55

*Thread Reply:* Thanks bjorn!

🙌 Björn Lütjens
aruna (arunas@mit.edu)
2023-01-19 12:08:04

*Thread Reply:* Sasha, yes, it's for a climate misinformation project. 🙂

👍 Sasha Luccioni
aruna (arunas@mit.edu)
2023-01-19 12:39:45

*Thread Reply:* Thanks everyone! ❤️ all good here, just closed the form. 🙂

Suzanne Stathatos (suzanne.stathatos@gmail.com)
2023-01-19 16:26:17

Has anyone here used Google Coral before for a project and if so, would you be willing to chat with me about it?

Alan Papalia (alanpapalia@gmail.com)
2023-01-19 19:23:57

*Thread Reply:* spent a decent amount of time on one this summer - overall takeaways here so others can see but happy to chat more as well!
  • Strengths: great at everything it's stated to be good at (fast, efficient, relatively easy to use if you can get a model that is compatible)
  • Ease of use: have to spend some time wrangling models to get them to compile to the format the Corals accept (don't remember the exact format, but there were quite a few operations used in many models that were not supported). Best case is you're using a model that already has a well-demonstrated ability to compile to the Coral format (e.g. YOLO)
  • Model size: must use very small models. I'd have to look up docs to remember how small this is, but I think it was somewhere in the range of yolo-small or yolo-tiny. Definitely saw some performance loss due to needing very small networks, but maybe could have clawed back some performance using clever tricks elsewhere.

👍 Jon Van Oast
Ed Miller (ed@hypraptive.com)
2023-01-19 20:09:02

*Thread Reply:* @Sam Kelly @Henrik Cox (Sentinel) Did you use Google Coral?

Sam Kelly (sam@conservationxlabs.org)
2023-01-19 20:16:18

*Thread Reply:* Hi Suzanne - Yes, we use the Coral SoM in the Sentinel. Would be happy to chat and share any learnings! Overall it’s pretty good, just some little quirks (I concur with what Alan mentioned) that I would be willing to chat about, definitely would love to hear more about how you are planning on using it. I know that @Thijs uses the Coral in some of their work.

Suzanne Stathatos (suzanne.stathatos@gmail.com)
2023-01-20 12:41:09

*Thread Reply:* Thank you all for your responses! Cool, right, I was chatting with a group that is trying to use it as an end-to-end system to monitor animals in an urban setting. From what everyone has said so far, it seems superb if you can use their out-of-the-box models and pretty good if you can use small models that can work via tflite.

Thijs (thijs@q42.nl)
2023-01-23 09:59:41

*Thread Reply:* Please let us know how it works for you @Suzanne Stathatos or if we can help in any way!

Ronan Wallace (rwallace@macalester.edu)
2023-01-22 12:39:43

Hi everyone! I'll be going on a little roadtrip soon, and I'm looking for some good podcasts on conservation tech. Are there any out there that you've enjoyed? (thank you so much!!) 😄

Josh Seltzer (jyseltz@gmail.com)
2023-01-22 13:07:42

*Thread Reply:* I'd love to hear some recommendations as well! The Mongabay podcast sometimes covers things related to conservation tech, but outside of that i haven't been able to find many good ones. There's a ton of great conservation and climate tech podcasts i've been listening to (happy to recommend if anyone's interested) but conservation tech seems definitely underrepresented!

Ștefan Istrate (stefan.istrate@gmail.com)
2023-01-22 16:11:09

*Thread Reply:* I really enjoyed @Roland Kays’ Wild Animals: https://open.spotify.com/show/1M5daW5fcOtXOjYkHkrZQF

Spotify
🙏 Roland Kays
Roland Kays (rwkays@ncsu.edu)
2023-01-22 18:02:31

*Thread Reply:* thanks - about to release season 3!

Roland Kays (rwkays@ncsu.edu)
2023-01-22 18:04:18

*Thread Reply:* Also working on a new YouTube channel, 2 episodes up so far https://www.youtube.com/channel/UCTMtNawoWB8-7M8waKnuiqg

YouTube
🙌 Josh Seltzer, Ștefan Istrate
aruna (arunas@mit.edu)
2023-01-22 23:42:50

@here Thank you so much for participating in my last survey. Here's another survey that should take <5m on classifying images. Some of the images in this survey can be triggering so please participate only if you are able to. Thanks in advance: https://forms.gle/xyjtGPnU6b7eygvc8. Please direct any questions to me via DM (as opposed to on this channel/on the thread) so as to not bias others about the survey.

Google Docs
👍 Omiros Pantazis, Vijay Karthick
Thijs (thijs@q42.nl)
2023-01-23 10:00:43

Do there happen to be people here who have deployed a YOLO(v5) model in the cloud? I'm trying to deploy one on Google AppEngine and I'm looking for some best practices.

Dan Morris (agentmorris@gmail.com)
2023-01-23 12:13:23

*Thread Reply:* Some folks I know who are using MegaDetector v5 (which is just a trained YOLOv5) in the cloud, though AFAIK none via AppEngine: @Nicholas Osner, @Nathaniel Rindlaub, @Matt Hron

Thijs (thijs@q42.nl)
2023-01-23 13:22:51

*Thread Reply:* Awesome, thanks for the references. I'm trying to figure out what the most "lightweight" way is to load and run the model. For "native YOLOv5" it seems I have to include a ton of libraries.

I also exported my model to tflite but I cannot really find examples on how to load and run that model (my default tflite scripts don't seem to work).

I'm also experimenting with openvino
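For the TFLite route mentioned above, a minimal load-and-run sketch looks like the following. Hedged: the model path and the float32 NHWC input convention are assumptions about a standard YOLOv5 TFLite export; the `Interpreter` API itself is the standard one from `tflite_runtime`.

```python
import numpy as np

def run_tflite(model_path, image):
    """Run one preprocessed image through an exported .tflite model."""
    # tflite_runtime is the lightweight pip package; full TensorFlow also
    # works (use tf.lite.Interpreter instead of tflite.Interpreter).
    import tflite_runtime.interpreter as tflite
    interp = tflite.Interpreter(model_path=model_path)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    # YOLOv5 TFLite exports typically expect float32 NHWC scaled to [0, 1],
    # e.g. shape (1, 640, 640, 3) -- check inp["shape"] for your export.
    interp.set_tensor(inp["index"], image.astype(inp["dtype"]))
    interp.invoke()
    return interp.get_tensor(out["index"])  # raw predictions; NMS still needed

# e.g.: run_tflite("yolov5s.tflite", np.zeros((1, 640, 640, 3), np.float32))
```

Note the raw output still needs confidence filtering and NMS unless NMS was baked into the export.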

Dan Morris (agentmorris@gmail.com)
2023-01-23 13:57:21

*Thread Reply:* How are you measuring "lightweight"? If you measure by the disk footprint of your dependencies and/or the install time of your dependencies, yes, YOLOv5's native inference path is heavy, mostly because PyTorch itself is very large. But if you define "lightweight" by engineering complexity, I would consider eating the disk footprint of YOLOv5's dependency list. Their dependency setup has been tested in lots of environments, plus if you export to other formats, you likely won't get exactly the same results, which may complicate your debugging.

But if you are in an environment where disk space and/or install time are constrained, or where you already have lots of dependencies that are already compatible with one of the other export formats (e.g. TFLite), those are IMO great reasons not to use the native inference path.

Curious what you end up doing!

Thijs (thijs@q42.nl)
2023-01-23 14:19:04

*Thread Reply:* Yeah, currently my app won't even deploy because of the huge list of dependencies...

OpenVino seems to be nicer / quicker and less bulky. But I find it very hard to find the correct python script to run the model with OpenVino.

@Dan Morris you mean, try to peel away as many dependencies as I can and try to stick with pytorch?

Thijs (thijs@q42.nl)
2023-01-23 14:20:33

*Thread Reply:* Sometimes I really wonder how people get sh**t done in this space 😅 I've been a software engineer for over 20 years, but the whole ML space seems to be the most frustrating so far 😇 Especially if you don't "just" wanna train a model, but actually want to use it somewhere.

Dan Morris (agentmorris@gmail.com)
2023-01-23 16:32:28

*Thread Reply:* It's hard for me to weigh in on whether you should try to eliminate individual dependencies or just make it work as-is in your deployment environment. Whatever issues you encountered in deployment, it may be easier to fix them and keep the YOLOv5 inference environment totally intact, as opposed to modifying the inference environment to fit your application. But, that's easy for me to say, since I don't know what those issues are and I don't have to fix them. 🙂

My simplified mental model is that all application environments are just Linux VMs that support arbitrary Docker containers, in which case it probably is easiest to keep YOLOv5 exactly as it is, dependencies and all. But that model is clearly oversimplified, and if the payloads your execution environment can run have limitations, maybe it makes sense to either prune problematic dependencies away from the YOLOv5 code or switch to another runtime.

Good luck!

Thijs (thijs@q42.nl)
2023-01-24 09:22:48

*Thread Reply:* It seems like this is the best option for me: https://learnopencv.com/object-detection-using-yolov5-and-opencv-dnn-in-c-and-python/

It exports the model to ONNX format, and then I only need the opencv-python-headless dependency to do inference.

LearnOpenCV – Learn OpenCV, PyTorch, Keras, Tensorflow with examples and tutorials
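The core of that approach is small; here is a hedged sketch (the file names are placeholders, the 640 input size and the output-shape comment assume a stock COCO-trained yolov5s export, and letterbox preprocessing is omitted for brevity):

```python
import numpy as np

def detect(onnx_path, image_bgr):
    """Run a YOLOv5 ONNX export through OpenCV's DNN module."""
    import cv2  # opencv-python-headless is the only non-stdlib dependency
    net = cv2.dnn.readNetFromONNX(onnx_path)
    # YOLOv5 exports expect RGB float input scaled to [0, 1] at the export
    # resolution (often 640x640); swapRB handles OpenCV's BGR ordering.
    blob = cv2.dnn.blobFromImage(image_bgr, scalefactor=1 / 255.0,
                                 size=(640, 640), swapRB=True, crop=False)
    net.setInput(blob)
    return net.forward()  # raw predictions; confidence filtering + NMS still needed

# Export first, from inside the yolov5 repo:
#   python export.py --weights yolov5s.pt --include onnx
```

For a COCO-trained yolov5s the raw output is a grid of candidate boxes with objectness and class scores, so the post-processing step is the same as with the native inference path.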
Michael Bunsen (notbot@gmail.com)
2023-01-23 19:46:21

Has there been any talk about upgrading this Slack workspace so we don't lose all the awesome resources older than 90 days? Perhaps we can crowdsource the cost or get a sponsorship?

👍 Jose Ruiz-Munoz, Nicolas Arrieta Larraza
💯 Carly Batist, Katelyn Morrison
Andrew Schulz (akschulz@gatech.edu)
2023-01-24 03:41:59

*Thread Reply:* So I believe non-profits get one free professional Slack account, and that would likely be the best way.

❤️ Katelyn Morrison
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 04:37:08

*Thread Reply:* non profits do!

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 04:37:27

*Thread Reply:* you just need to find a properly registered non profit that will give you their account

Eddie Zhang (ete@ucsb.edu)
2023-01-24 01:55:58

I wonder how expensive it would be?

Nick Giampietro (giampiet@pdx.edu)
2023-01-24 02:03:11

About $10k/mo assuming Pro tier and no special deal from Slack 😨

Eddie Zhang (ete@ucsb.edu)
2023-01-24 04:51:46

Doesn’t seem too likely in that case 😁

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 04:54:36

Definitely go find an ngo one you can use

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 04:54:52

our slack workspace for wildlabs is free

Michael Bunsen (notbot@gmail.com)
2023-01-24 13:26:19

*Thread Reply:* Is the Slack for Wildlabs public? Or is that just for your internal use?

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 13:40:41

*Thread Reply:* Oh it's our internal(ish) one. It also has all the people we're collaborating with on various projects and all our community organisers/group managers in there as well

Michael Bunsen (notbot@gmail.com)
2023-01-24 18:58:54

*Thread Reply:* Ah cool!

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 04:55:23

You just need a registered NGO/charity that will let you use their free workspace

👍 Jose Ruiz-Munoz
Jose Ruiz-Munoz (jfruizmu@unal.edu.co)
2023-01-24 06:34:10

*Thread Reply:* Would we have to switch then to a new workspace (with a new name)?

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 06:34:39

*Thread Reply:* no no

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 06:35:51

*Thread Reply:* To upgrade this workspace you (the workspace owner - sara?) need to be able to input a charity number and say it's their workspace, then slack will upgrade it.

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 06:37:32

*Thread Reply:* I'm sure one of the 1374 people in this group is associated with a group that isn't making use of their free account - and so would kindly donate it to the cause

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 06:38:04

*Thread Reply:* OR - someone goes through the hassle (cost?) of registering 'Ai for conservation' as a charitable group?

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 06:38:44

*Thread Reply:* OR someone knows someone at slack that will do us a solid and just give us a free workspace. SURELY in this group you guys know people working there

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-01-24 07:45:37

*Thread Reply:* Just my 2 cents: it seems like the free Slack upgrade for NGOs etc. is limited to 250 users only. For more than that, there is an 85% discount -- so still some costs. The last option @Stephanie O'Donnell floated sounds best at the moment 🙂

👀 Stephanie O'Donnell
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-24 08:11:11

*Thread Reply:* ooooh i didn't realise there was an upper limit. Also Andrew called this idea first!

Sara Beery (sbeery@caltech.edu)
2023-01-24 11:40:15

*Thread Reply:* There's a limit, I've looked into this but it would still be $$$ for a community this size 😞

Matt Allen (mja78@cam.ac.uk)
2023-01-24 07:35:37

Hello! I am putting together a benchmark dataset for tree crown segmentation - do you have data and would you like to join our effort?

Call for data: we're looking for exact tree crown segmentations (rather than bounding boxes) collocated with some form of (very) high resolution aerial imagery - drone or satellite. Segmentation derived from TLS or ALS data is absolutely ideal, but high quality manual labels are also ok.

In return, we can offer:

  • Co-authorship on a publication (likely at some point in 2023)
  • Access to the dataset for analysis prior to the full release

If there's anything else you think that I/we might be able to offer in return, feel free to ask!

Link for registering interest: bit.ly/crownseg
Contact for questions: mja78@cam.ac.uk

Matt

👍 Stephanie O'Donnell, Emily Lines, Felipe Parodi, David Russell, Tiziana Gelmi Candusso, Sara Beery, Robin Zbinden
🌳 Tjomme Dooper
Benjamin Kellenberger (benjamin.kellenberger@yale.edu)
2023-01-24 08:01:51

*Thread Reply:* @Ben Weinstein

Mikey Tabak (tabakma@gmail.com)
2023-01-24 08:38:05

*Thread Reply:* Hi Matt, This sounds exciting. I don’t have the data you’re looking for, but I’m curious, what do you mean by very high resolution?

Felipe Parodi (parodifelipe07@gmail.com)
2023-01-24 10:07:13

*Thread Reply:* In case you do have bounding box annotations for each tree, you could use something like boxinstseg to predict the segmentation labels

Matt Allen (mja78@cam.ac.uk)
2023-01-24 13:27:51

*Thread Reply:* Re: resolution - at the moment my thinking is it would probably need to be ~30cm (I think that's the lowest resolution that it's currently possible to get? Someone feel free to correct me if I'm wrong though) to separate crowns at the individual level in areas where the species distribution is pretty homogeneous (in very diverse canopies it's quite easy to separate trees using hyperspectral data or stretching the RGB bands when you're labelling). Maybe 50cm ish would work.

👍 Mikey Tabak
Matt Allen (mja78@cam.ac.uk)
2023-01-24 13:31:19

*Thread Reply:* On boxinstseg etc - For the purpose of assembling the dataset I'm really looking for ground truth data rather than unsupervised predictions. It would be very interesting to see whether weakly supervised approaches like this could reproduce the manual/laser scanning derived labels. I'd need the ground truth labels to check how well it was doing though

Ben Weinstein (benweinstein2010@gmail.com)
2023-01-24 13:35:21

*Thread Reply:* @Matt Allen, what's the goal? Check out the last couple datasets and the https://google.github.io/auto-arborist/ https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009180.

journals.plos.org
Ben Weinstein (benweinstein2010@gmail.com)
2023-01-24 13:37:53

*Thread Reply:* I think to have a new dataset there should be some kind of new/different goal. Do you have field based ground validation? You can try coarsening the 10cm data we have in that benchmark to get a sense for accuracy. I'm not sure individual tree segmentation can be rigorously done at 30/50cm. Really really hard.

Ben Weinstein (benweinstein2010@gmail.com)
2023-01-24 13:39:14

*Thread Reply:* We strongly suspect that the 80% accuracy of deepforest (https://deepforest.readthedocs.io/en/latest/landing.html) at 10cm is limited by human annotators' inability to do better than that. We have cross-validated across many observers, including people in the field with tablets (https://peerj.com/preprints/27182.pdf). We are really unsure if we can do better with hand labeling.

Ben Weinstein (benweinstein2010@gmail.com)
2023-01-24 13:40:59

*Thread Reply:* There are hundreds (hundreds) of papers on this, many include zenodo links. Some big ones are going to get integrated here when I have a second. https://github.com/weecology/DeepForest/issues/340

Comments
7
Ben Weinstein (benweinstein2010@gmail.com)
2023-01-24 14:14:38

*Thread Reply:* Also check out the large body of work being done at https://openforestobservatory.org/ by Derek Young. I spoke with him a couple weeks ago and he was interested in contributing data.

openforestobservatory.org
🙌 Emily Lines
Emily Lines (erl27@cam.ac.uk)
2023-01-24 15:40:11

*Thread Reply:* Thanks @Ben Weinstein we'll look into all of those ☺️

Ben Weinstein (benweinstein2010@gmail.com)
2023-01-24 15:41:44

*Thread Reply:* Happy to zoom whenever if I can be helpful.

Ben Weinstein (benweinstein2010@gmail.com)
2023-01-24 15:44:57

*Thread Reply:* I also know that the world resource institute has been doing a ton of this work for labeling/validation. By coincidence we are talking this week about data through https://www.globalforestwatch.org/. I'll know more next week.

globalforestwatch.org
Matt Allen (mja78@cam.ac.uk)
2023-01-24 15:46:18

*Thread Reply:* Thanks for all this, appreciate it - apologies for being slow to reply; it's getting (somewhat) late here 😅

Ben Weinstein (benweinstein2010@gmail.com)
2023-01-24 15:46:35

*Thread Reply:* no rush.

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-01-24 19:33:58

*Thread Reply:* We [Restor/ETH] have around 5k 2kx2k images @ 10cm with instance segmentation labels, and we're currently field trialling. Images are from global drone data, so we generalise pretty well. Hopefully this will also be published sometime in the near future. Validation is by far the hardest problem, so we're working with partners to get ground truth where we can (it's still incredibly hard to do at the tree level, beyond asking people to eyeball images to check that the model is correct).

We opted to not segment closed canopy, because we're sceptical that you can get accurate instance segmentation with RGB alone for exactly the reason that Ben mentioned. We basically treat it as a crowd class and have the model try and distinguish between 1 vs many (and you can maybe use a different downstream task to segment the closed areas).

The difficulty with relying on other data sources that would help you disentangle is that they're hard to get consistently from restoration practitioners. LIDAR is pretty rare, though there is some evidence you can do OK with crown extraction from photogrammetry. Ideally we're targeting something that will work reasonably well on high-resolution satellite imagery which is mainly RGB (+ maybe NIR).

Would also be great to join a Zoom, I'd be happy to share where we're at.

Ben Weinstein (benweinstein2010@gmail.com)
2023-01-24 19:40:23

*Thread Reply:* Sounds great, let me know when they are available and we can add them. Our model was trained only on US data and it gets applied broadly, we have no way of knowing its performance at a global scale. If you can try the release model, that would always be of interest. One very strange caveat is that we have yet to find a LiDAR + RGB integration that outperforms RGB alone. NEON's LiDAR cloud is sparse (5 points /m), which makes sense for the huge area it covers, often more than 4 million trees. Again, given lack of field validation, hard to know if because we annotated in RGB, we do better in RGB. We often see papers that use very dense LiDAR coverage and get good results, but they don't really reflect large scale field usage. This was what designers at OpenForestObservatory and I were discussing. We had a one off paper https://ieeexplore.ieee.org/abstract/document/9387530 that looked at post-hoc HSI splitting of RGB crowns. We have always meant to get back to it, but spent too long focusing on species ID.

ieeexplore.ieee.org
Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-01-24 19:58:28

*Thread Reply:* Yes, I've spoken to other folks and they've echoed the LIDAR bit. Weird. Not something we've tried though and our images were sourced from Open Aerial Map who unfortunately don't provide surface models.

Currently I'm looking at options to QA our labels (paid-for and automated). We're also looking at crowdsourcing for label verification, something along the lines of "place a keypoint in the centre of every tree in the image". Our masks are already pretty good, but we have a lot of annotator confusion between single/multiple trees. I think there is also promise in sub-classing the labels to try and force the models to learn better distinctions like pine/leafless/broadleaf.

It'd be great to have a chat about benchmarking (both our data on deepforest and vice versa). We do have a fair number of orthomosaics+DSMs from partner sites so one idea would just be to look at agreement between RGB model predictions and 3D model predictions. There are also a few datasets that can be used for tree count verification e.g. there's some post-disaster imagery over Tonga that has individual trees labelled (including species).

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-01-25 06:19:36

*Thread Reply:* @Ben Weinstein I was speaking to Justdiggit today and the LIDAR question came up. Do you know what sort of pre-processing people have tried to integrate either LIDAR returns or DSMs?

Emily Lines (erl27@cam.ac.uk)
2023-01-25 06:28:37

*Thread Reply:* Thanks @Ben Weinstein @Josh Veitch-Michaelis we do have some TLS-segmented data in a few ecosystems and are interested in this issue of accurately identifying where there is ground verification. @Matt Allen has been looking at other methods of ground verification too (e.g. using trunk locations from DGPS/total stations)

Matt Allen (mja78@cam.ac.uk)
2023-01-25 08:03:59

*Thread Reply:* On the issue of the annotations being verifiable - I think the degree to which manual labels etc. are acceptable really depends on the ecosystem - for a lot of plantation-type forests I think hand annotations should be fine. On ALS, I should clarify that I really meant drone-based ALS (ULS might be the right acronym), where the point density is much higher than, say, from scanners mounted to planes. In very dense canopy the labels should probably be derived from TLS, unless they can be verified in some other way. The aim is to see whether labels derived from these sensors can be reproduced using the RGB data alone. I don't think I would reasonably expect it to be spot on, but it would be interesting to quantify exactly how far off they are (similarly, we could also see how close human annotators tend to be).

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-01-25 08:09:48

*Thread Reply:* One thought we had was that if we can only delineate the denser areas (whilst also identifying standalone trees), then that's still a good step towards handing off to another algorithm/method. Simple/classical CV methods can work quite well in certain environments, but are very sensitive to background inclusion (e.g. if you give them an area that has no trees in it, they'll happily try and predict). If you can also constrain where to run a LIDAR/photogrammetry/DSM-based algorithm, they should have fewer false positives.

I'm hoping that with some crowdsourcing we might be able to better quantify human annotator disagreement in different scenes (different biomes, urban vs natural, etc).

Sara Beery (sbeery@caltech.edu)
2023-01-24 11:42:04

Re: the discussions about making the slack more permanent, I would LOVE any ideas folks have! I've thought about this a bit but so far don't see any solution that is sustainable, without having a very expensive monthly payment

❤️ aruna, Katelyn Morrison, Michael Bunsen
Titus (titus@colossal.com)
2023-01-24 11:44:03

*Thread Reply:* It's not terribly expensive - you could start an NGO for AI for Conservation. It would cost a couple hundred bucks a year, but would require a little bit more overhead on your part.

👍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2023-01-24 11:45:33

*Thread Reply:* Well, I've been thinking about it already. If there are folks with experience/cycles to help with this maybe reply in the chat and we can look into it?

Titus (titus@colossal.com)
2023-01-24 11:46:26

*Thread Reply:* I've never done it for an NGO, but I've started a few LLCs throughout the years. Happy to chat about it and see if I can help.

👍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2023-01-24 11:48:13

*Thread Reply:* @Aamir Ahmad expressed interest as well. I can ask Priya Donti, they just went through this process for CCAI. And maybe I can get help from MIT lawyers once I start in the fall.

👏 Titus, Katelyn Morrison
Graeme Phillipson (graeme.phillipson@bbc.co.uk)
2023-01-24 12:32:43

*Thread Reply:* There are other opensource/academic communities which use Discord instead of slack (I’m in the NeRF studio discord https://docs.nerf.studio/en/latest/ for example). Discord is very much more orientated towards social communities rather than professional (so has a weird look and lots of things about emojis and gifs), but the past messages are free. Of course given the number of people already in the slack instance I’m sure people would rather not migrate!

docs.nerf.studio
Silvia Zuffi (silvia@mi.imati.cnr.it)
2023-01-24 13:54:16

*Thread Reply:* How much would the fee be? Could we find some sponsorship?

Jose Ruiz-Munoz (jfruizmu@unal.edu.co)
2023-01-24 14:20:15

*Thread Reply:* Pro: $7.25 USD per person/month when billed yearly; $8.75 USD per person/month when billed monthly 😲

😧 Tiziana Gelmi Candusso
Silvia Zuffi (silvia@mi.imati.cnr.it)
2023-01-24 14:21:18

*Thread Reply:* Thanks, that is a lot indeed!

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-01-24 15:39:19

*Thread Reply:* For a group of our size, we might need Business. With the 85% discount, I calculate about 2300€/month for our ~1300-strong group. Plus, our group keeps growing!
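As a back-of-the-envelope check, the figures quoted in this thread are roughly consistent with each other (the Business-tier per-user rate below is an assumption, not a number from the thread):

```python
users = 1300                  # approximate size of this workspace
pro_rate = 8.75               # USD/user/month, Pro billed monthly (quoted in the thread)
biz_rate = 12.50              # USD/user/month, assumed Business-tier list price
edu_discount = 0.85           # the 85% discount mentioned in the thread

pro_cost = users * pro_rate                           # full Pro price, ~ $11k/month
discounted = users * biz_rate * (1 - edu_discount)    # Business after discount, ~ $2.4k/month
```

Which lines up with the "about $10k/mo" and "~2300€/month" estimates above.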

😧 Tiziana Gelmi Candusso
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-01-24 15:42:17

*Thread Reply:* But it might be worth contacting the Slack sales team to get a fair idea regarding pricing? @Sara Beery as the creator of the workspace, perhaps you could drop in a query?

🙌 Kakani Katija
Jose Ruiz-Munoz (jfruizmu@unal.edu.co)
2023-01-24 16:00:09

*Thread Reply:* Not sure if this might apply: https://slack.com/help/articles/206646877-Apply-for-the-Slack-for-Education-discount

Slack Help Center
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-01-24 16:40:40

*Thread Reply:* Indeed, this is the discount I relied on 🙂

👍 Jose Ruiz-Munoz
Pen-Yuan Hsing (penyuanhsing@posteo.is)
2023-01-25 07:22:59

*Thread Reply:* For what it's worth, I've had success with Element (which runs on the Matrix network) for collaboration within a 3-year big EU-wide research project, and use it extensively in various other communities.

Uniquely, its entire tech stack is 100% open source, which is more socially responsible, and it has seen adoption by the French and German governments for internal communications, so it's not just another random app.

The free-of-charge personal tier is sufficient for our needs, and because it is fully open source there is always the option to self-host.

I suspect most would prefer to remain on Slack so as to not create another account. But for cases where we start a new community elsewhere, IMO this is much better - ethically and socially speaking - than Discord.

👍 Dan Stowell, Sara Beery, Tiziana Gelmi Candusso, Yseult Hb, Jon Van Oast
Kakani Katija (kakani@mbari.org)
2023-01-25 13:33:31

*Thread Reply:* @Sara Beery I just hired someone for OVAI who has a degree in non-profit administration, and may be similarly having to set up a non-profit. I'll know more in two weeks and happy to compare notes

🙌 Sara Beery
👍 Silvia Zuffi
Bourhan (bourhan@rfcx.org)
2023-01-26 18:33:27

*Thread Reply:* Hey everyone! Bourhan here, CEO of the non-profit Rainforest Connection (rfcx.org), I think the process of incorporating a company and filing for the 501c3 status is quite extensive and lengthy especially if the primary goal is to get a subscription discount, etc. The IRS sometimes takes upwards of a year to approve a non-profit status, at times even longer.

Bourhan (bourhan@rfcx.org)
2023-01-26 18:36:33

*Thread Reply:* Couple of things I can help with. We currently have a free slack subscription for non-profits and our organization was selected this year to be part of the Salesforce accelerator. Salesforce (owners of Slack) offers a 2 year free subscription as part of the accelerator for us. Given that we already have the non-profit edition, we wouldn't have any use for it so we could possibly give that up for this workspace. The key would be to convince Salesforce to do that which I don't necessarily see any issues with, though we would definitely have to make the case to them.

Bourhan (bourhan@rfcx.org)
2023-01-26 18:36:48

*Thread Reply:* Let me know if y'all are interested in that and we can talk about the next steps.

Sara Beery (sbeery@caltech.edu)
2023-01-26 18:37:19

*Thread Reply:* Definitely interested! I'm happy to help make that case

Bourhan (bourhan@rfcx.org)
2023-01-26 18:39:50

*Thread Reply:* Awesome! I'll talk to our team at Salesforce about it. If you have any small write up on the purpose of this workspace etc... it would be super helpful

🙏 Silvia Zuffi, Dan Morris, Vijay Karthick
Sara Beery (sbeery@caltech.edu)
2023-01-25 13:41:28

New possibly relevant workshop announcement!

Call for Contributions: Workshop on Probabilistic Approaches in Weather and Climate Science (8-9 May, Kigali)

We are excited to be hosting the first Workshop on Probabilistic Approaches in Weather and Climate Science (climate-workshop23.github.io) at AIMS, in Kigali, on the 8th and 9th of May 2023. In fields such as climate science, where the impact of inaccurate and untrustworthy predictions could be extreme, there is a need for probabilistic approaches that robustly quantify uncertainty. This workshop aims to bring together researchers across a range of disciplines who are interested in probabilistic machine learning methods to tackle pressing environmental issues. The workshop will provide an engaging forum for researchers to present and discuss their work, as well as the opportunity to hear from a number of keynote speakers working in the area.

We invite submissions of short abstracts (up to 2 pages plus references) for consideration as posters or short presentations during the workshop. Submissions that explore probabilistic methods in any area of weather and climate science are welcome. We reference a few areas of interest below:
  • Weather forecasting
  • Climate model emulation
  • Data assimilation
  • Extreme events
  • Constraining uncertainty in climate
  • Climate model ensembling
  • Hydrology
  • Clouds and aerosols
  • Air quality
  • Downscaling

Details
  • Submission link: https://forms.gle/CsT7GGnDQv6WAb2KA
  • Submission deadline: 1 March 2023
  • Acceptance notification: 8 March 2023

We particularly welcome submissions that focus on climate and weather related issues that affect lower and middle income countries. This workshop will not publish proceedings, and previously published work is welcome. If your abstract is accepted, we expect you to present your work in person at the event.

accounts.google.com
👍 Oisin Mac Aodha, Jon Van Oast
🙌 Stephanie O'Donnell, Mahir Patel, Hamed Alemohammad
Jon Van Oast (jon@wildme.org)
2023-01-25 13:46:02

*Thread Reply:* correct me if i am wrong, but isn't this the third (topic-relevant!) conference in kigali this year?? rwanda for the win! :flag_rw:

Kristina Kupferschmidt (kupfersk@uoguelph.ca)
2023-01-25 14:12:35

*Thread Reply:* @Cody Kupferschmidt

Sara Beery (sbeery@caltech.edu)
2023-01-25 17:36:03

*Thread Reply:* I think a lot of them are in conjunction with ICLR!

👍 Jon Van Oast
Jon Van Oast (jon@wildme.org)
2023-01-25 17:37:59

*Thread Reply:* yeah the proximity in date of this one would seem to be that. i think ICCB was in june or july. kinda cool coincidence i guess. maybe a long holiday in rwanda is in order? 😏

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-01-25 18:55:07

Hi all, passing this opportunity on behalf of some friends and colleagues in Bonn. The abstract deadline has been extended to the 31st of Jan so there's still a week or so if you have something that you think would be good to submit. A full paper is not required at this stage. The scope is somewhat tangential to conservation, but I think the topics are extremely relevant to the work that we do and I'm sure some people here would find some overlap:

Bonn Sustainable AI conference 2023 Sustainable AI Across Borders

This is the second Sustainable AI conference organized by the Bonn Sustainable AI lab at Bonn University’s Institute for Science and Ethics (IWE). The focus of the first conference in June 2021 was to create a community of researchers in the space of Sustainable AI and to raise awareness on the topic. The second conference will focus on cross cultural perspectives to address the variety and scope of ethical issues on a global scale. An inspiration for this theme is to acknowledge the reality that certain countries play an integral role in the early production phase and the waste management but may never experience the benefits of AI.

• Environmental justice throughout the development and procurement chain of AI (exploitation of planet, minerals, e-waste, land and water usage)
• Social justice throughout the development and procurement chain of AI (exploitation of human labor)
• AI and (post)-colonialism, decolonizing AI, and digital sovereignty
• AI and global healthcare
• Social and environmental impacts of AI on urbanization
• AI and socio-technical imaginaries/AI narratives
• AI and gender
• AI and economy
• AI and agriculture (food production, biodiversity)
• AI and manufacturing
• Conceptual foundations of sustainability

https://sustainable-ai-conference.eu/

😎 Timm Haucke, Jon Van Oast, Ronan Wallace, Kristina Kupferschmidt
Dan Stowell (dan.stowell@naturalis.nl)
2023-01-27 03:33:31

Hi all! I'm Dan and I work on AI methods for understanding animal sounds. I'm wondering which conferences/workshops you're planning to attend this year? I know all the acoustics events (& I can suggest some), but I'd like to know where you're likely to be this year...

👋 Ali Johnston, Oisin Mac Aodha, Stephanie O'Donnell, Sara Beery, Carly Batist, Dan Morris
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-01-27 04:46:17

*Thread Reply:* We're organising a movement ecology workshop at ICCB in July which will include an ML focus. We also will be organising an acoustic one with ML elements at some point this year. Will share details about both when available, and they'll also be posted in the conservation tech event calendar on wildlabs - https://wildlabs.net/events

👍 Dan Stowell, Carly Batist, Jon Van Oast, Antonio Ferraz, Taiki Sakai - NOAA Affiliate
Sara Beery (sbeery@caltech.edu)
2023-01-27 11:19:38

*Thread Reply:* Hi Dan! Great to see you here 🙂 I'll definitely be at CVPR and ESA, and possibly ICCV. I'm considering attending ICLR as well.

👍 Dan Stowell, Stephanie O'Donnell, Carly Batist, Dhruv Sheth
👍:skin_tone_5: Ando Shah
Carly Batist (cbatist@gradcenter.cuny.edu)
2023-01-27 12:03:03

*Thread Reply:* I’ll be at ICCB! And wildlife society (Nov). Potentially IBAC

👍 Dan Stowell, Stephanie O'Donnell, Toryn Schafer
Dan Morris (agentmorris@gmail.com)
2023-01-27 18:37:06

*Thread Reply:* Definitely check out the #upcoming_events channel. Also +1 for ESA and the wonders of the Pacific Northwest.

👍 Dan Stowell
Carly Batist (cbatist@gradcenter.cuny.edu)
2023-01-27 18:41:18

*Thread Reply:* Damn…. do I need to get myself to Portland too??

🎉 Jon Van Oast
Dan Morris (agentmorris@gmail.com)
2023-01-27 18:32:52

I just created an "issue" (really somewhere between an "issue" and a "boring blog post") on the MegaDetector repo listing a zillion MegaDetector-related todo's we haven't gotten to, plus some half-baked ideas that have come up in email threads with users, with the intention of having a better place to point people who want to get involved in making contributions to the repo:

https://github.com/microsoft/CameraTraps/issues/331

I'm not sure we'll even have bandwidth to support a lot of external contributions right now (the activation energy for doing that is high, even though the ROI is very high), so I would take this with a grain of salt wrt this repo in particular.

But I would love to have a place to point colleagues who (a) are good at writing code and (b) want to get involved in open-source projects related to conservation (or sustainability more broadly). There are some great lists of sustainability-related OSS projects out there, e.g.:

https://opensustain.tech/

...but I can't tell from that list which projects would benefit from additional non-domain-expert software engineering or data science hands on deck. E.g., to pick one at random that I don't know anything about, the "Python Toolbox for the Evaluation of Soil Moisture Observations" looks awesome, but I have no idea whether someone who doesn't know anything about soil moisture observations could contribute.

So... does anyone have favorite OSS projects (or lists of projects) where there's a high prior on someone being able to make contributions without a ton of domain expertise? E.g. maybe a couple repos that do a really good job using the GitHub "good first issue" tag?

🙌 Sara Beery, Ștefan Istrate, Viktor Domazetoski, Josh Veitch-Michaelis, Jinsu Elhance, Stephanie O'Donnell, Lindsey Dukles
🐘 Peter van Lunteren, Jinsu Elhance
Caleb Robinson (calebrob6@gmail.com)
2023-01-30 01:14:10

*Thread Reply:* (self-advertising here 🙂) -- We're creating torchgeo, a "PyTorch domain library, similar to torchvision, providing datasets, samplers, transforms, and pre-trained models specific to geospatial data," to make it easy for non-domain experts to use geospatial data (mainly satellite and aerial imagery) and benchmark their cool models on geospatial datasets. An awesome first contribution for anyone is to add a dataset!
• Here is the docs page listing the datasets we already have -- https://torchgeo.readthedocs.io/en/latest/api/datasets.html
• Here's a list of datasets that we've put together that would be awesome to add to torchgeo -- https://docs.google.com/spreadsheets/d/1TU1T5RdVWBify6MZGVVJICOX3c7qkhfmSGN2luyFNVU/edit?usp=sharing
Also relevant, here is our "Contributing" page (https://torchgeo.readthedocs.io/en/latest/user/contributing.html), which attempts to walk potential contributors through all the extra complicated-looking stuff that happens in GitHub Actions.

We (5 maintainers) are pretty active on github so feel free to open an Issue if you want to discuss the details of a dataset before or while you are implementing.
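For anyone wondering what "adding a dataset" actually involves: torchgeo datasets follow the usual map-style dataset interface, where `__getitem__` returns a sample dict and `__len__` the sample count. A dependency-light sketch of that shape (the class and field names here are illustrative, not torchgeo's real base classes; see the contributing guide above for those):

```python
import numpy as np

class ToyNestDataset:
    """Illustrative map-style dataset: samples are dicts holding an
    image array and an integer label, the shape torchgeo-style datasets
    return. Real contributions subclass torchgeo's base classes."""

    def __init__(self, n: int = 10):
        # Stand-in for downloading/indexing real imagery on disk.
        self.images = [np.zeros((3, 64, 64), dtype=np.float32) for _ in range(n)]
        self.labels = [i % 2 for i in range(n)]

    def __getitem__(self, idx: int) -> dict:
        return {"image": self.images[idx], "label": self.labels[idx]}

    def __len__(self) -> int:
        return len(self.images)
```

Once a class like this wraps a real download-and-index step, it plugs into standard PyTorch dataloaders unchanged.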

👍 Dan Morris, Ștefan Istrate, Aakash Gupta
🙌 Michael Bunsen, Carl Boettiger
🎉 Burak Ekim, Aakash Gupta
Michael Bunsen (notbot@gmail.com)
2023-01-30 15:41:50

*Thread Reply:* Thanks for this thread! I hadn't seen the long list of projects at opensustain.tech

But I have seen a number of ecological or domain-specific projects that could certainly use help by "general" python, database, devops and front-end engineers! Here are some ways that come to mind or that I have helped projects accomplish:

• Converting code in notebooks to python modules
• Finding bottlenecks: writing faster database queries, adding caching layers, turning one-at-a-time steps into batch functions
• Containerizing code (docker), making the app portable to multiple environments, making it easier for more developers to on-board & contribute
• Adding support for another platform (windows, linux, or perhaps R to python)
• Writing tests, implementing automation, CI/CD, making things reproducible in general
• Writing and improving documentation
• Utilizing packages, patterns and 3rd party integrations that are common in the software world but not in the science world (black, pep8 formatting, Sentry, NewRelic, GitHub Actions, AWS S3 open license storage, etc)
• Updating dependencies and testing with those versions
• Improving the management of data in general, perhaps switching to a database, or making it easier to share and retrieve large files or images; removing large files & log files from the git repo
• Improving or creating a UI
• Making the project easier to configure via environment variables, CLI parameters, etc.
• Adding python type annotations
• Changing hard-coded local absolute paths to relative ones 🙂
• Removing sensitive information from the repo & git history
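Some of those fixes are tiny in code terms; e.g. the hard-coded-path and environment-variable items can be as small as this (a Python sketch; `CAMTRAP_DATA_DIR` and the file names are illustrative):

```python
import os
from pathlib import Path

# Instead of a hard-coded absolute path like
#   DATA_DIR = "C:/Users/alice/cameratraps/data"
# resolve a relative default and allow an environment-variable override:
DATA_DIR = Path(os.environ.get("CAMTRAP_DATA_DIR", "data")).resolve()

# Downstream code then builds paths portably across OSes:
labels_csv = DATA_DIR / "labels.csv"
```

Two lines, but it makes the project runnable on any machine rather than just the author's.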

🙌 Aakash Gupta, Carl Boettiger
Dan Morris (agentmorris@gmail.com)
2023-01-30 16:56:02

*Thread Reply:* Those all sound like optimal ways to get new engineers involved in the conservation space. Do you have pointers to specific projects?

👀 Carl Boettiger
Tjomme Dooper (tjomme@fruitpunch.ai)
2023-01-31 09:23:57

*Thread Reply:* The AI for Wildlife Lab at FruitPunch hosts 10-week Challenges accessible to anyone with some basic programming skills. There's domain experts involved, but usually in a stakeholder role and to give masterclasses to the participants. Most of the work is done by ML/DS enthusiasts with an interest in conservation, rather than conservationists with an interest in ML/DS.

Enrollment for an acoustic monitoring Challenge (AI for Forest Elephants) is open until Feb 17th. Usually a new Challenge in conservation launches every few months.

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-02-02 09:11:04

*Thread Reply:* Please also check Omdena challenges: https://omdena.com/projects/ There are projects and local chapters that you can join. Some interesting ongoing projects relate to extreme weather forecasts, monitoring air quality, and preventing wildfires.

Carl Boettiger (cboettig@berkeley.edu)
2023-02-02 15:35:19

*Thread Reply:* @Michael Bunsen ❤️ what a great list. I'm saving this for future reference. Do you have links to any specific examples you can share?

I'm particularly keen myself to become more familiar with streamlined / most-used ways to do these things in python. Having been in the R world a long time, I'm familiar with which are the better/more popular patterns for caching, unit tests, containers, dependency management, linting, documentation, etc., but trying to cross over into python I rarely know where to start!

Dan Morris (agentmorris@gmail.com)
2023-02-10 12:59:00

*Thread Reply:* One more note to add here... I mentioned opensustain.tech in my original post; I noticed that they also have a really useful table where they index a lot of GitHub metadata, including whether each repo uses the "good first issue" tag:

https://airtable.com/shr9we419r2TkpLkc/tblfcyw3opQsmaqQj/viwtjrGUtJZG6yGBH?blocks=hide

Sorting that table in descending order by "good first issue" seems like an awesome way to find projects. Just the fact that a repo uses the "good first issue" tag at all is a decent proxy for "we can help on-board new devs".

Also FWIW there are not a ton of wildlife conservation projects there, so if folks here have OSS projects that might be interested in external contributors, add your projects!
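The same "good first issue" signal is queryable directly from the GitHub REST API as well, if you'd rather script it than sort the table; a small sketch (the owner/repo names are just examples):

```python
from urllib.parse import quote

def good_first_issues_url(owner: str, repo: str) -> str:
    """Build the GitHub REST API URL listing a repo's open issues
    labeled 'good first issue'. Public repos need no auth token
    at low request rates."""
    label = quote("good first issue")  # spaces must be URL-encoded
    return (f"https://api.github.com/repos/{owner}/{repo}/issues"
            f"?labels={label}&state=open")

url = good_first_issues_url("microsoft", "CameraTraps")
```

Fetching that URL returns a JSON array of matching issues; an empty array is itself useful, since it tells you a repo never uses the tag.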

Michael Bunsen (notbot@gmail.com)
2023-02-13 19:52:05

*Thread Reply:* Here is one: a Python package for estimating how many beavers a given stream can support. I got excited about it after it was mentioned in a published book about beavers ("Eager"). However, when I went to check it out, I found it was impossible to run. Multiple file paths are hard-coded to a person's local workstation, and it must be run within ArcGIS (also Windows only). It would be awesome for someone with knowledge of spatial data & python to re-work this package to run outside of arcgis (using shapely, etc.), or perhaps just port it to QGIS. https://github.com/Riverscapes/pyBRAT/

For Java developers, or big data engineers, it would be awesome to give GBIF some love! It's a big repository for biodiversity data from many sources (including iNaturalist). Any improvements to GBIF will help out many conservation efforts around the world. https://github.com/gbif

Catalog of Life & Checklist Bank are also big projects that need help. They are working hard to create harmonized taxonomies for all walks of life, as well as tools for different organizations to merge and keep up with changing taxonomies. They have a newer javascript/html portal that needs help. https://github.com/CatalogueOfLife

Michael Bunsen (notbot@gmail.com)
2023-02-13 19:59:53

*Thread Reply:* One more! I think Android / iOS apps for DIY camera traps would be very helpful for many projects. Here is one I know of that could use some help: "BioLens" (formerly "AutoMoth", I believe). Perhaps someone can add support for TensorFlow Lite models?? https://github.com/bhostetler18/BioLens

Beckett Sterner (bsterne1@asu.edu)
2023-02-02 15:28:23

Artificial Intelligence and Conservation: Indigenous AI

Beckett Sterner (bsterne1@asu.edu)
2023-02-02 15:28:32

Seminar Overview
Seminar Five - Artificial Intelligence and Conservation: Indigenous AI

Indigenous people have been leveraging technology to achieve their conservation goals along with everyone else focused on conservation. However, there is a long history of exploitation and colonization of these communities that must be acknowledged and eliminated moving forward. Agreements like Free, Prior and Informed Consent and the U.N. Declaration on the Rights of Indigenous Peoples, and principles like the CARE Principles for Indigenous Data Governance, apply to data being gathered for AI efforts and must be considered. Furthermore, ownership or authority over Indigenous and local data must be respected. This session will cover some of the amazing AI work being done by Indigenous groups and will touch on issues such as data sovereignty and how those impact the growth and application of AI techniques.

Speakers: Michael Running Wolf, Indigenous AI Mason Grimshaw, Earthrise Media

👍 Jon Van Oast, Carly Batist, Justin Kay, Sara Beery, Yseult Hb, Alessandra Sellini, Sasha Luccioni, Kristina Kupferschmidt
👀 Carl Boettiger, Sara Beery, Kristina Kupferschmidt
Clare Price (theclareprice@gmail.com)
2023-02-10 16:44:51

*Thread Reply:* Hello! I was wondering if this was recorded? I would love to listen and learn, but just missed the seminar. Thanks!

Nora Gourmelon (nora.gourmelon@fau.de)
2023-02-03 07:57:54

Hi all,

the AI-newcomer award is granted by the German Association of Computer Science (Gesellschaft für Informatik) to young researchers under 30 years for innovative developments in the area of artificial intelligence. I'm one of three finalists for the award this year in the field of natural and life sciences. If you would like to support me and make research at the interface of sustainability and AI more visible, you can vote for me at https://kicamp.org/ki-camp-2023/ki-newcomerinnen-2023/ If you would like to take a look at my research, visit my profile on my institute's website: https://lme.tf.fau.de/person/gourmelon/

Best, Nora

🏆 aruna, Pen-Yuan Hsing, Vincent Christlein, Viktor Domazetoski, Alan Papalia, Timm Haucke, Tiziana Gelmi Candusso, Nicolas Arrieta Larraza
🎉 Jon Van Oast
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2023-02-03 16:12:28

*Thread Reply:* just voted, good luck with the grant application!

🤗 Nora Gourmelon
Maxime Cauchoix (mcauchoixxx@gmail.com)
2023-02-08 09:29:48

Hello everyone, we set up a special issue to take some time to think more broadly about the pros and cons of automation in conservation and ecology: https://www.frontiersin.org/research-topics/53052/can-technology-save-biodiversity . Feel free to propose a contribution! Thanks!

🎉 Jon Van Oast, Yseult Hb, Ando Shah, Maxime Cauchoix, Carl Boettiger, Pen-Yuan Hsing
Dan Morris (agentmorris@gmail.com)
2023-02-10 17:38:23

It just occurred to me that although I know what many of the folks on this Slack do (for work), I haven't really "met" (even in the 2023 sense of the word) many of you, e.g. I have almost no idea what even the "regulars" that I chat with here all the time do for fun. I assume that everyone is also into '80s rock and ping-pong and golden retrievers, but I don't know that for sure. And you all seem really interesting. So, I'm declaring 7:30am PT (the best I can do, sorry East Asia and Australia) on Tuesday 2/21 "Conservation AI Coffee Time", and I'm going to be holding a cup of coffee and hanging out in a call and reading Quora, and anyone who wants to bring your coffee and talk to other AI4C folks, join in. I'll post a call link here the day before, but if anyone wants me to add you to the calendar event I made for myself, reply/DM/email and I'll add you.

Mmmm, coffee.

Note I originally posted a different day, then moved to Tuesday 2/21. I heard coffee tastes better on Tuesdays. It's just good science.

#general #upcoming_events

☕ Jon Van Oast, Peter Bull, Sara Beery, Chris Yeh, Rowan Converse, Elijah Cole (Deactivated), Carly Batist, Viktor Domazetoski, Jose Ruiz-Munoz, Kalindi Fonda, Risa Shinoda, Ritwik, Timm Haucke, Felipe Parodi, aruna, Andrew Schulz, Swayam Thakkar, Ștefan Istrate, Nora Gourmelon, Kristina Kupferschmidt, Josh Seltzer, Josh Veitch-Michaelis, Justin Kay, Mitch Fennell, Taiki Sakai - NOAA Affiliate, Michael Bunsen, Ronan Wallace, Valentin Lucet, Akash Jaiswal, Toryn Schafer, Yuerou Tang, Rajiv Pattni
🎉 Jon Van Oast, Peter Bull, Declan, Sara Beery, Chris Yeh, Carly Batist, Timm Haucke, Yseult Hb, Pen-Yuan Hsing, Josh Seltzer, Michael Bunsen, Rajiv Pattni
😅 Lukas Picek
🍵 Remi Gosselin, Yuerou Tang
❤️ Stephanie O'Donnell, Nora Gourmelon, Jon Van Oast, Michael Bunsen, Kasirat, Talia Speaker, Edward Bayes
Kalindi Fonda (kalindi.fonda@gmail.com)
2023-02-10 22:25:24

*Thread Reply:* Oh nice idea! 🥳 I actually came here today with the intention of posting my calendly and asking if anyone wants to chat and meet 1:1, but I'll wait until after the group coffee. And maybe then if there are more people who are interested in something like this we can make a random-coffee-partner channel. 🌱 (and yes please I'd love to be added to the calendar invite kalindi.fonda@gmail.com)

❤️ Carly Batist
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-02-14 13:42:50

*Thread Reply:* Love this idea dan! Incidentally... this just so happens to be the half hour before our Feb Variety Hour, so you can roll into an all our conservation tech gathering if you get excited! https://wildlabs.net/event/variety-hour-february

Dan Morris (agentmorris@gmail.com)
2023-02-14 13:50:08

*Thread Reply:* OMG, I somehow totally missed that, even though I literally searched for one and only one thing to not schedule over, and it was the WILDLABS variety hour. For some reason I just missed this. I 100% do not want to schedule over 30 minutes of that, and two hours of community-ing is a lot, so I'm going to move this hypothetical AI4C coffee a day earlier. My bad!!!

Editing my original post to change to Tuesday 2/21. Sorry!

❤️ Stephanie O'Donnell
Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-02-14 13:50:35

*Thread Reply:* Well 100% my fault as I literally only just published it

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-02-14 13:51:51

*Thread Reply:* but as it's apparently helpful - future reference, our event schedule is as follows 😂

Stephanie O'Donnell (stephanie.odonnell@wildlabs.net)
2023-02-14 13:54:05

*Thread Reply:* March 29, April 26, May 31, June 28, July 26, August 30, Sep 27, Oct 25, Nov 29. Always the last Wednesday of the month. AND - if anyone would like to speak about their work at any events or has someone we should feature, please let me know!

Daniel Velasco (daniel.elias.velasco@gmail.com)
2023-02-15 10:40:06

*Thread Reply:* This sounds like fun! I’d love to join: Daniel.Elias.Velasco@gmail.com

Dan Morris (agentmorris@gmail.com)
2023-02-20 16:31:17

*Thread Reply:* Meeting link for tomorrow's (Tuesday 2/21) AI4C Coffee "event" @ 7:30am PT:

meet.google.com/nib-pjiy-dwc

"Event" is in quotes because, as a reminder, I have no plan at all for this hour. This is not like a WILDLABS event where there are well-curated games and sound effects, this is just "here's a meeting link, YMMV". For the same reason, there is a distant possibility that we exceed the number of participants in a regular meeting, in which case... see above, YMMV.

That said, here's my ask to everyone who joins: when you join the call, post one sentence in the chat about what you do for work, and two sentences in the chat about what you do for fun. If there is awkward silence, I will pick someone at random and ask more about your hobbies. If it is a total cacophony, I won't try that hard to fix it, but it wouldn't hurt if others were ready to spontaneously create their own meetings for side chats, e.g. if I find someone who only wants to talk about Fender basses, I won't hesitate to talk about Fender basses for an hour, so eventually you'll all have to say "hey, Dan, take this to a side chat".

YMMV!

👍 Daniel Velasco, Kalindi Fonda, Sara Beery, Peter Bermant, Taiki Sakai - NOAA Affiliate, Yseult Hb, Ștefan Istrate, Akash Jaiswal, Risa Shinoda
🎉 Carly Batist, Sara Beery, Michael Bunsen, Stephanie O'Donnell, Viktor Domazetoski, Nora Gourmelon, Jon Van Oast
🎸 Matt Weldy, Michael Bunsen, Nora Gourmelon
😂 Stephanie O'Donnell, Talia Speaker
👋 Eelke
Dan Morris (agentmorris@gmail.com)
2023-02-21 11:27:06

*Thread Reply:* Thanks everyone for joining! We learned that AI for Conservation folks like to climb, and that underwater rugby is a thing.

❤️ Stephanie O'Donnell, Sara Beery, Jon Van Oast, Suzanne Stathatos, Akash Jaiswal, Carly Batist
🧗 Taiki Sakai - NOAA Affiliate
🎉 Jon Van Oast
☕ Jon Van Oast
Daniel Velasco (daniel.elias.velasco@gmail.com)
2023-02-21 11:27:39

*Thread Reply:* Fun meeting. It was great meeting you all!

👍 Dan Morris
Kalindi Fonda (kalindi.fonda@gmail.com)
2023-02-21 11:30:21

*Thread Reply:* 🌊 Oh yes underwater rugby 💙, here's the map with the teams: https://www.uwrmap.com/ let me know if you are actually interested in trying it out and I can connect you with the people, but even if you just show up, people are usually very welcoming.

🙌 Suzanne Stathatos, Dan Morris
Jon Van Oast (jon@wildme.org)
2023-02-21 12:08:17

*Thread Reply:* oh drat. only just now noticed this got moved ahead to today. 😭 very sorry to miss you all. i guess i will wait til next one. (at least i have consolation that i have met some of you, including dan, in person, right? wahhh) 😁

👍 Michael Bunsen, Dan Morris, Timm Haucke
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2023-02-21 13:12:10

*Thread Reply:* Thanks for organizing, Dan! If anyone wants to chat about marine stuff 🐟 or hiking 🧗‍♀️, don’t be a stranger

🎉 Jon Van Oast
🐋 Taiki Sakai - NOAA Affiliate, Dan Morris
Peter Lawrence (peter.lawrence@cumbria.ac.uk)
2023-02-13 13:04:10

Hi all, really happy to join. I have a working knowledge of AI in various conservation settings but hoping to develop some clearer projects and will be on the hunt for collaborators 🙂

👋 Sara Beery, Sankaran (shun-ka-run), Daniel Velasco
Sara Beery (sbeery@caltech.edu)
2023-02-13 15:57:25

New NASA Applied Remote Sensing Training Program (ARSET) relevant to this community: NASA ARSET has opened a new open, online intermediate webinar series: Biodiversity Applications for Airborne Imaging Systems. This four-part webinar series will focus on NASA Earth Observations (EO) that can be used to characterize the structure and function of ecosystems and to measure and monitor terrestrial and aquatic biodiversity.

If you have any questions, feel free to email natasha.r.johnson-griffin@nasa.gov. More info on the training and how to register for free can be found here: https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgo.nasa.gov%2F3DkCnvs&data=05%7C01%7Ccbradley%40biologicaldiversity.org%7C64efe90aaab84844613b08db0b93084b%7C95c0c3b8013c435ebeea2c762e78fae0%7C1%7C0%7C638116498368338551%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=R%2BPkvKOgpk%2FS8EswcwjlMjwVfHqn5b8tbFqPijmfhT4%3D&reserved=0

‼️ Justin Kay, Timm Haucke, Ronan Wallace, Wenxin Yang, Mikey Tabak
❤️ Jon Van Oast, Carly Batist, Timm Haucke, Ronan Wallace, Alex Brace, Viktor Domazetoski, Cathy Atkinson, Lucia Gordon, Talia Speaker, Kristina Kupferschmidt, Ando Shah
📡 Ronan Wallace
Justin Kitzes (justin.kitzes@pitt.edu)
2023-02-16 11:19:33

Hi everyone, our group is once again searching for one or more Research Assistants to contribute to our bioacoustics research projects. A description of the opening and how to apply can be found here - https://www.kitzeslab.org/research-assistant-position-available/. Feel free to be in touch with any questions!

👀 Stephanie O'Donnell, Risa Shinoda
❤️ Carly Batist, Yseult Hb, Sara Beery, Clare Price
🐦 Taiki Sakai - NOAA Affiliate
👍 Sara Beery
Alasdair Davies (alasdair@shuttleworthfoundation.org)
2023-02-18 15:04:46

Hi all. It's world pangolin day! so no better a time to share the news that the Paul Allen Family Foundation have funded a six year programme called Operation Pangolin to do something serious to address their decline, starting in West and Central Africa. The good news for this community is that it will include the use of open source conservation technology, with Arribada leading on development, but importantly will aim to develop reliable in-field data streams for AI / ML ingestion upstream (think regular inference data from camera traps, acoustic recorders etc but on a regular basis). I hope many of you can get involved to help the humble pangolin not to be turned into medicine and ornaments. More here: https://gfjc.fiu.edu/operation-pangolin/index.html

😎 Jason Holmberg (Wild Me), Ted Schmitt, Aamir Ahmad, Stephanie O'Donnell, Sara Beery, Talia Speaker
🎉 Jason Holmberg (Wild Me), Carly Batist, gvanhorn, Dan Morris, Cameron Trotter, Yseult Hb, Aamir Ahmad, Dhruv Sheth, Henrik Cox (Sentinel), Stephanie O'Donnell, Anton Alvarez
❤️ Viktor Domazetoski, Carly Batist, Dhruv Sheth, Joanna Turner, Henrik Cox (Sentinel), Stephanie O'Donnell, Rebecca Wilks, Anton Alvarez
🙌 Anton Alvarez
Maciej Adamiak (adamiak.maciek@gmail.com)
2023-02-20 04:16:14

Hi everyone 👋! I'm a machine learning engineer and a geographer specialized in computer vision, spatial analysis and remote sensing. If you have something interesting to do, be sure to let me know 🙂

👍 Jose Ruiz-Munoz, Josh Seltzer, Ben Weinstein, Daniel Velasco, Aleksis Pirinen
👋 Jon Van Oast, Sara Beery, Caleb Robinson, Dan Morris, Aleksis Pirinen
Daniel Velasco (daniel.elias.velasco@gmail.com)
2023-02-21 11:29:06

Hi, does anyone know of any resources (repos, datasets, open source models/tools) that one could use to learn more about bioacoustics in ML/deep learning? Thanks!

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-02-21 11:36:09

*Thread Reply:* Hi! Some useful resources to look into - the Conservation Tech Directory, WILDLABS Acoustic Monitoring Group, and I’ve got some resources/lists of review papers/etc on my website 🙂.

👍 Daniel Velasco
Taiki Sakai - NOAA Affiliate (taiki.sakai@noaa.gov)
2023-02-21 11:38:56

*Thread Reply:* Dan Stowell's paper is a good place to start

💯 Carly Batist, Daniel Velasco, Yves Bas
Daniel Velasco (daniel.elias.velasco@gmail.com)
2023-02-21 18:15:18

*Thread Reply:* Thank you!
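Beyond the papers and directories above, the standard first exercise in ML bioacoustics is turning audio into spectrograms and treating them as images. A minimal numpy-only sketch of that step (real pipelines typically use librosa or torchaudio instead):

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns an array of shape (freq_bins, time_frames)."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

# A 1 kHz test tone at a 16 kHz sample rate: the energy should land in
# frequency bin 1000 / (16000 / 512) = 32.
t = np.arange(16000) / 16000
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
```

From there, a log scale and a mel filterbank get you to the input most bird/bat/whale classifiers train on.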

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-02-21 15:59:45

Just saw that Appsilon released a new version of Mbaza AI - https://appsilon.com/mbaza-ai-update/

🙌 Anton Alvarez
Andrzej Białaś (andrzej@appsilon.com)
2023-02-22 03:08:53

*Thread Reply:* Hey @Carly Batist, it was updated in October of last year, but due to [many reasons here] I never got to write that up for our blog; we shared it with the current userbase. There is a lot of exciting stuff coming to Mbaza soon, and once that happens I'll make sure to share it with the community better. Stay tuned and so on.

👍 Carly Batist, Sara Beery, Cara Appel, Anton Alvarez
Drea Burbank (drea@savimbo.com)
2023-02-22 18:45:16

Hey guys, super new here. I'm a project developer from the Colombian Amazon who uses GEE to help smallholder farmers get paid directly for conservation with carbon credits. We're also launching a biodiversity credit with CME Group, Google/Ripple, etc. this spring. I joined the group because we have a weird ML project we're working on: photo-recognition of iPhone panos taken by smallholder farmers to identify forest integrity below the canopy. Think Google Street View for the Amazon. Curious to hear what you all think of it.

👀 Josh Seltzer
👋 Maciej Adamiak, Marconi Campos
👍 Prach Sri
Jose Ruiz-Munoz (jfruizmu@unal.edu.co)
2023-02-22 19:37:30

*Thread Reply:* It sounds interesting to me. I work at a university in Colombia and have heard of some related efforts. Feel free to DM me if you would like to chat about it

Drea Burbank (drea@savimbo.com)
2023-02-25 12:18:41

*Thread Reply:* Yay thank you! Colombia has some amazing innovation happening in this space.

Prach Sri (prach@todreamalife.com)
2023-02-28 15:51:02

*Thread Reply:* Savimbo and KUNGFU.AI Partner to Bring AI to Rainforest Conservation

https://hubs.li/Q01zYZRt0

Dan Stowell (dan.stowell@naturalis.nl)
2023-02-23 04:39:15

A naive question for the "fine-grained" folks... how can I tell if my task is "fine-grained"? In some descriptions, it seems like any species-ID task would be called fine-grained. But maybe it's all relative? For example we're also working on individual-ID which is REALLY fine-grained!

👍 Burooj Ghani
Oisin Mac Aodha (macaodha@caltech.edu)
2023-02-23 04:46:48

*Thread Reply:* The term is often used and abused; e.g., some species tasks are not necessarily fine-grained (a duck versus a pigeon, say).

We often show the image below at our fine-grained workshop.

There is also a huge literature in human vision related to subordinate classification that you might have seen. https://link.springer.com/article/10.1007/s004260050047

Dan Stowell (dan.stowell@naturalis.nl)
2023-02-23 04:53:44

*Thread Reply:* This image is useful, thanks - I like the way "instance recognition" is included, which is perhaps the best-defined level in the hierarchy - we don't want to generalise across instances, at that level. I still find unclear the distinction between the top 2 levels. Perhaps because human visual perception is not a relevant starting point for me! 😉

Oisin Mac Aodha (macaodha@caltech.edu)
2023-02-23 04:57:49

*Thread Reply:* Even instance is not totally unambiguous, e.g. what if I buy two copies of the exact same chair... But agreed, it is likely the most well defined.

👍 Dan Stowell, Sara Beery
Sara Beery (sbeery@caltech.edu)
2023-02-23 09:55:44

*Thread Reply:* I often think of it as a "how easy is it to tell these things apart" spectrum with no clear boundaries, somewhat individual and rooted in each person's expertise/experience. For example, folks at the Lab of O find it VERY easy to tell apart bird species that I would easily lump together, and I'm pretty good at pretty niche stuff like matching ballet choreography to specific choreographers, etc. Coarse categories would be things that don't require any training to distinguish. But there's definitely cultural and regional differences in what those might be, which is interesting.

👍 Jon Van Oast
Jon Van Oast (jon@wildme.org)
2023-02-23 12:38:09

*Thread Reply:* curious which species you are working with re: individual id. we have quite a bit of experience with it over at wildme.org and i would definitely say it falls into "fine-grained".

Elijah Cole (Deactivated) (ecole@caltech.edu)
2023-02-23 12:39:11

*Thread Reply:* One nice attempt to formalize “granularity” in a machine learning context can be found here:

https://arxiv.org/abs/1912.10154

👍 Oisin Mac Aodha, Dan Stowell, Sara Beery, Ben Weinstein
Dan Stowell (dan.stowell@naturalis.nl)
2023-02-23 12:40:20

*Thread Reply:* Thanks all! Jon - we're working on developing individual-ID methods that work across many species - so the focus is very much on the ML methodology. Having said that, we're working with a collection of different bird datasets, as well as various terrestrial mammals (...based on sound!)

😎 Jon Van Oast
👍 Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2023-02-23 17:13:22

*Thread Reply:* @Peter Kulits is also digging into multi-species re-ID, maybe you two should sync up and see if the efforts are complementary?

🎉 Jon Van Oast
👍 Peter Kulits
Dan Stowell (dan.stowell@naturalis.nl)
2023-02-24 03:57:31

*Thread Reply:* Thanks! Elijah - thanks for the paper. I was actually quite surprised at their way of formalising the notion: essentially, how neatly clustered the classes are. I'll take some time to think about whether it's a good measure for us. Intuitively, I'm more drawn to the directly hierarchical formulation.

👍 Sara Beery
Howard L Frederick (simbamangu@gmail.com)
2023-02-23 10:55:55

Has anyone got experience with detecting large bird nests from aerial imagery?

Vulture nests are something we’d like to start looking for in our aerial survey imagery (oblique photos from aerial surveys of large mammals, image footprints ~ 150 x 200m). Some examples below - lots of questions around this, not least of which the paucity of training data. I wonder if we could / should photoshop some of these nests into other trees and backgrounds for training?
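On the photoshopping idea: synthetic paste-in training data is a recognized trick for rare objects (the "cut, paste and learn" family of augmentations). A minimal, purely illustrative sketch of the mechanics follows; the array sizes, the `paste_patch` helper, and the tiny grayscale "images" are all made up, and real use would want blending, scale jitter, and randomized placement so a model doesn't latch onto paste artifacts:

```python
import numpy as np

def paste_patch(background, patch, top, left):
    # Copy-paste augmentation: overlay a cropped nest patch onto a new
    # background image and return the image plus the implied bounding box.
    out = background.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    bbox = (left, top, left + w, top + h)  # xmin, ymin, xmax, ymax
    return out, bbox

# Hypothetical 8x8 grayscale "background" and 2x2 white "nest" patch.
bg = np.zeros((8, 8), dtype=np.uint8)
nest = np.full((2, 2), 255, dtype=np.uint8)

# Fixed position for illustration; real augmentation would randomize this.
aug, box = paste_patch(bg, nest, top=3, left=4)
print(box)
```

The payoff is that every pasted patch comes with a free, exact bounding-box label, which is exactly what's scarce when only a handful of real nest photos exist.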

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-02-23 10:57:05

*Thread Reply:* @Ben Weinstein

👍 Rowan Converse, Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2023-02-23 23:45:45

*Thread Reply:* happy to help, but heading to Colombia. @Howard L Frederick message me again in a week or so. https://deepforest.readthedocs.io/en/latest/bird_detector.html

👍:skin_tone_4: Howard L Frederick
Andrzej Białaś (andrzej@appsilon.com)
2023-02-24 03:41:56

*Thread Reply:* 👋 Hi @Howard L Frederick - we are working on a model to automate the counting of cormorant (shag) nests in drone imagery from Antarctica, here's a summary of a quick POC we did last year: ➡️ https://appsilon.com/yolo-counting-nests-antarctic-birds/

Now last time I checked the final model was almost ready, and we should be able to share more on this soon. A paper is also in the works, but as with those, it's coming in several months at best.

Perhaps not 1-to-1 what you are looking for, but I expect you can get some good insights, and I'll be happy to pass any specific questions to our ML team and see if we can help (email me at andrzej@appsilon.com, DM here, or via LinkedIn).

appsilon.com
Estimated reading time
12 minutes
Howard L Frederick (simbamangu@gmail.com)
2023-02-24 04:17:14

*Thread Reply:* That's a very good start - will be in touch directly!

👍 Andrzej Białaś
Howard L Frederick (simbamangu@gmail.com)
2023-02-24 04:25:59

*Thread Reply:* I suspect the chaotic backgrounds from African savannahs will be problematic but this is a good basis to start with

Ben Weinstein (benweinstein2010@gmail.com)
2023-03-04 09:31:02

*Thread Reply:* okay, back from Colombia. Can you provide a single full-size image? We can test our bird detector on it. It has been trained on 200,000 images, so you only need a tiny handful to customize it to your situation. We tend to do annotations in QGIS. There is a video in the link above.

Adam Benzion (adam@edgeimpulse.com)
2023-02-23 12:21:38

Hi, I’m Adam from Edge Impulse. I'm helping SmartParks with ElephantEdge (AI-powered tracking collars), the folks from Conservation X who created Sentinel (an AI-powered nature camera), and WildLabs with wildlife research grants. I help these organizations on a few projects and always want to learn and do more. https://techcrunch.com/2020/11/20/can-artificial-intelligence-give-elephants-a-winning-edge/

TechCrunch
Written by
Walter Thompson
Est. reading time
6 minutes
🐘 Suzanne Stathatos, Dhruv Sheth, Toryn Schafer, Ando Shah, Sara Beery, Sara Olsson, Timm Haucke, Ed Miller, Edward Bayes
❤️ Stephanie O'Donnell, Talia Speaker, Dhruv Sheth, Sara Olsson, Timm Haucke
🎉 Carly Batist, Dhruv Sheth, Dan Morris, Dante Wasmuht, Timm Haucke
👋 Ed Miller
Sara Beery (sbeery@caltech.edu)
2023-02-23 19:29:45

From Tilo Burghardt:

Bristol AI and Nature Week

A week full of “AI for Nature” talks, with a top-notch opening event on Monday at 6pm at the Great Hall by Tanya Berger-Wolf (USA), a pioneering scientist in AI for Conservation, and Robert Dawes from the BBC about AI use in their Natural History Productions. Lots of stuff to attend for all interested in AI and Nature - the full event website is at http://people.cs.bris.ac.uk/~burghard/ai_nature_week .

🎉 Jon Van Oast, Justin Kay, Suzanne Stathatos, Graeme Phillipson, Timm Haucke, Kalindi Fonda, Andrew Schulz, Reshma Ramesh Babu
👍 Michael Bunsen, Risa Shinoda, gvanhorn, Timm Haucke, Nathan Fox, Shir Bar
🙏 Andrzej Białaś, Prabath Gunawardane
Thijs (thijs@q42.nl)
2023-02-24 03:10:05

I just want to share with you that my TEDx talk is online, you can watch it here! 🎉

It's about how we are using smart cameras with machine learning to try and mitigate human-elephant-conflicts in Gabon.

Please let me know what you think of it! 🙏

linkedin.com
🙌 Peter van Lunteren, Stephanie O'Donnell, Carly Batist, Kalindi Fonda, Talia Speaker, Cathy Atkinson, Gracie Ermi, Sara Beery, Dan Morris, Ed Miller, Andrew Schulz, Shir Bar, Rita Pucci
❤️ Jon Van Oast
Thijs (thijs@q42.nl)
2023-02-24 08:34:13

*Thread Reply:* I look a bit scared in the YouTube preview image 😂

🐘 Kalindi Fonda
Peter van Lunteren (contact@pvanlunteren.com)
2023-02-26 06:16:47

*Thread Reply:* Great stuff! What kind of model do you use for detecting the elephants/persons? And how long does it take to process one image on the embedded device?

Thijs (thijs@q42.nl)
2023-02-27 04:32:47

*Thread Reply:* @Peter van Lunteren We use a TensorFlow Lite image classification model, it takes ~100ms to classify a single image.

Peter van Lunteren (contact@pvanlunteren.com)
2023-02-27 06:20:57

*Thread Reply:* I once (very briefly and not at all thoroughly) played with the idea of running the MegaDetector model on a Raspberry Pi with a camera and motion detector attached. The idea was similar: to detect poachers or animals. One of the problems was the processing speed on the embedded device (especially for such a huge model as MD). You’d have to create something like a queue and hope for not too many false positives. Or train your own lightweight model, but I didn’t really want to give up accuracy. And the other thing is that the existing commercial camera traps will always be better than some Raspberry Pi with a camera attached. Anyhow, long story short… good that you found a way to deploy this! Very impressive
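The queue idea Peter describes can be sketched with Python's standard library: motion-trigger events enqueue image paths, and a worker thread drains them through the model at whatever pace it can manage. Everything here is a toy stand-in (the `fake_detector` and the file names are invented), not any real deployment:

```python
import queue
import threading

def fake_detector(image_path):
    # Stand-in for a real model (e.g. a TFLite classifier or MegaDetector);
    # returns a label string for the image.
    return "animal" if "elephant" in image_path else "empty"

def worker(q, results):
    # Drain the queue until the sentinel None arrives.
    while True:
        path = q.get()
        if path is None:
            break
        results.append((path, fake_detector(path)))
        q.task_done()

q = queue.Queue(maxsize=100)  # bounded, so a slow model can't exhaust memory
results = []
t = threading.Thread(target=worker, args=(q, results))
t.start()

# Motion triggers can enqueue faster than the model processes.
for path in ["img/elephant_01.jpg", "img/bush_02.jpg", "img/elephant_03.jpg"]:
    q.put(path)

q.put(None)  # sentinel: no more images
t.join()
print(results)
```

The bounded queue is the key design choice on a Pi: if triggers outpace inference, `put` blocks (or you drop frames) instead of filling the SD card or RAM.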

Thijs (thijs@q42.nl)
2023-02-27 07:46:44

*Thread Reply:* Thanks Peter! I think there are some people who got the most recent version of MD running on a Pi, but I don't think that's an ideal solution.

We did use MD to preprocess our dataset, and we basically trained our own model to detect elephants and humans (together with Appsilon).

👍 Peter van Lunteren, Andrzej Białaś
🙏 Andrzej Białaś
Andrzej Białaś (andrzej@appsilon.com)
2023-02-28 10:09:21

*Thread Reply:* Good stuff @Thijs 💪 and cheers for the shout out!

👍 Thijs
Louis Moreau (luis.omoreau@gmail.com)
2023-02-24 04:01:30

Hi everyone, I am Louis, I lead the Developer Relations team at Edge Impulse. I started my career developing a low-power, GPS-based, and LPWAN IoT solution to track rhinos for an African conservancy (Zimbabwe). When not hacking hardware or automating every task I have to do manually more than 5 times, you can see me riding an electric skateboard in Lille, northern France, or scuba diving everywhere in the world 🦏 🤿 Happy to join this community! If you have questions about Edge Machine Learning, I'd be happy to help!

👋 Andrzej Białaś, Thijs, Stephanie O'Donnell, Timm Haucke, Carly Batist, Valentin Gabeff, Suzanne Stathatos, Dan Morris, Ed Miller, Andrew Schulz, Daniel Velasco, Josh Veitch-Michaelis, Sara Olsson, Dhruv Sheth, Nicolas Arrieta Larraza
Alasdair Davies (alasdair@shuttleworthfoundation.org)
2023-02-26 16:38:57

Hi @Louis Moreau, are you still based in France and work remotely for EI? I ask as we have a developer there and it's always nice to think about future in person meet ups

Louis Moreau (luis.omoreau@gmail.com)
2023-02-27 05:31:50

*Thread Reply:* Indeed, I am based in Lille, France and I work remotely for Edge Impulse 🙂

Tilo Burghardt (tb2935@bristol.ac.uk)
2023-02-26 18:52:14

Dear all,

just a last-minute invite to next week's "Bristol AI and Nature Week" at https://camtrapai.github.io/ai_nature_week.html, with lots of talks streamed online for anyone who would like to join... Tanya Berger-Wolf will be around, and there will be lots of talks on AI for Conservation, science drones, farming, natural history TV productions, etc. There is also a new edition of the "CamTrapAI meets Ecology Workshop" running at https://camtrapai.github.io. Anyone is welcome to tune in!

Best wishes, ---Tilo

camtrapai.github.io
👍 Ștefan Istrate, Devis Tuia, Timm Haucke, Valentin Gabeff, Kari Kuester, Andrew Schulz, Dan Morris, Nicolas Arrieta Larraza, Sara Beery, Risa Shinoda, Catherine Wang, Jan Kees, David
:thumbsup_all: Frederic Fol Leymarie
😎 Jon Van Oast
Nicolas Arrieta Larraza (n.arrieta.larraza@gmail.com)
2023-02-27 11:04:22

*Thread Reply:* Is it gonna be recorded? 🙂

Prach Sri (prach@todreamalife.com)
2023-02-28 15:49:12

Hi everyone, I’m with the project developer Savimbo,

We’re building a really cool #ML algorithm measuring jungle from the ground with #smallfarmers. This is a super novel application of AI for conservation.

This project, which is only possible because of KUNGFU.AI, is super simple and, at the same time, vastly important. Small farmers scan the jungle with photographs, and the algorithm says whether that jungle is intact. The algorithm is intended to scale on-the-ground deployment of microeconomics that stop deforestation, but the dataset has wider applications for species identification.

Savimbo
kungfu.ai
🎉 Jon Van Oast, Sara Beery, Dan Morris, Jose Ruiz-Munoz, Yseult Hb
Jinsu Elhance (jelhance@gmail.com)
2023-02-28 19:54:56

Hello everyone!

I am interested in pursuing a Ph.D. in computer vision methods for generating higher-resolution hyperspectral imagery from remote sensing data with lower spectral and spatial resolution. I'm looking for advice and guidance from researchers in this field, and would be grateful for any pointers or insights you can offer.

Specifically, I'm interested in labs or groups that are currently working on this topic, and any relevant papers or resources you can recommend. My research interests include addressing the geospatial digital divide by improving access to high-resolution hyperspectral imagery, and I recently completed a project on combining multispectral imagery and synthetic aperture radar for mangrove species classification along Kenya's coastline.

I would greatly appreciate any discussion on this, and feel free to ask me any questions!

Rebecca Wilks (R.C.Wilks@sms.ed.ac.uk)
2023-03-01 07:30:58

*Thread Reply:* Hey @Jinsu Elhance, I'm currently a PhD student at the Edinburgh/Leeds SENSE Centre for Doctoral Training in Satellite Imagery for Environmental Applications, which may have projects you'd be interested in.

While the projects are more application-based, as opposed to focusing on creating general methods of super-resolution for satellite imagery, these kinds of tasks are often baked into the processing work done for an ecological task, e.g. improving satellite imagery to better identify land cover types

I believe the initial round of applications is closed for this year, however there are often a handful of projects available throughout the year, so if interested then this is the place to keep an eye on: https://eo-cdt.org/projects/

Patrizia Paci (pp4649@open.ac.uk)
2023-03-31 05:43:32

*Thread Reply:* I don’t know if the group of Prof Jan van Gemert at TU Delft (Netherlands) works on this specific topic, but they are cool guys and Jan might be interested in a new perspective. Perhaps have a look at their publications to get an idea?

Daniel Velasco (daniel.elias.velasco@gmail.com)
2023-02-28 20:49:05

Hi, just wanted to check in here and ask if anyone in the chat applied to Meta’s 2023 AI residency? A few other applicants and I made a discord server for interviews if anyone here is interested. Thanks!

Aaron Ferber (aferber@usc.edu)
2023-03-01 00:03:32

*Thread Reply:* Hi! I would also like to join if possible!

Sara Beery (sbeery@caltech.edu)
2023-03-01 10:34:46

https://twitter.com/animaltracking/status/1630944261347725313?t=dQLwf4yDQWGhOnSadaKxw&s=19

twitter
Migration Dept., Max Planck Inst. Animal Behavior (https://twitter.com/animaltracking/status/1630944261347725313)
👀 Enis Berk Çoban, Viktor Domazetoski
😎 Jon Van Oast
Panayiotis Danassis (pdanassis@g.harvard.edu)
2023-03-01 15:18:05

Interested in Multi-agent Systems and AI for Social Good? We invite you to submit your work to AASG 2023, the 4th International Workshop on Autonomous Agents for Social Good, which I am co-organizing with Bryan Wilder, Kayse Lee Maass, and Aparna Taneja! At AAMAS 2023, London, May 29-30. Topics of interest include

panayiotisd.github.io
❤️ Lucia Gordon
Tilo Burghardt (tb2935@bristol.ac.uk)
2023-03-02 09:58:57

Dear all,

starting in 5min at 10am EDT / 3pm GMT / 4pm Central Europe - 2nd CamTrapAI meets Ecology Workshop streamed live on Teams at https://camtrapai.github.io/join.html

See you there, ---Tilo

camtrapai.github.io
🙌 Stephanie O'Donnell, Jon Van Oast, Michael Bunsen
👍 Oisin Mac Aodha, Josh Seltzer, Valentin Gabeff
👏 Rita Pucci
Katie Millette (millettek@gmail.com)
2023-03-02 12:34:44

Hi everyone, GEO BON is organizing a Global Conference on Biodiversity and Monitoring this fall (10-13 October 2023) in Montreal, Canada. You’re all invited to this in-person conference. Themes include, but are certainly not restricted to, “AI for biodiversity change”.

Important dates:
• Call for sessions: 1 – 31 March 2023
• Call for abstracts: 10 April – 14 May 2023
• Registration opens: April 2023
Proposals can be for Sessions, Workshops, Mini symposia, Fireside Chats or Panel Discussions. We hope to raise funds to support registrants from emerging and low-income countries and students/early-career researchers.

More details to come. Happy to answer questions 🙂 https://geobon.org/geo-bon-global-conference-monitoring-biodiversity-for-action/

🙌 Stephanie O'Donnell, Sara Beery, Jon Van Oast, Justin Kay, Dan Morris, Josh Veitch-Michaelis, Emily Charry Tissier, Clare Price, Andrzej Białaś, MEIXI LIN
🎉 Carly Batist, Yseult Hb, Andrzej Białaś
🥳 Andrzej Białaś
Kakani Katija (kakani@mbari.org)
2023-03-03 13:02:56

Hi everyone! In case you want to watch a Stanford/Hopkins Marine Station seminar given by our own @Sara Beery, you can register here! I’m sure she loves it when we advertise her talks here. 😉 https://events.stanford.edu/event/sara_beery_computer_vision_for_global-scale_biodiversity_monitoring

Stanford University
👍 Oisin Mac Aodha, Gedeon, Silvia Zuffi, Andy Viet Huynh, Jan Kees, Jason Holmberg (Wild Me)
🎉 Jon Van Oast, Declan, Gracie Ermi, Suzanne Stathatos, Adam Noach, Viktor Domazetoski, Yseult Hb, Dan Morris, Tiziana Gelmi Candusso, aruna, Andrew Schulz, Michael Bunsen, Risa Shinoda, Malte Pedersen, Carly Batist, Hirokatsu Kataoka (AIST), Dhruv Sheth, Ted Schmitt, Andy Viet Huynh, Jason Holmberg (Wild Me)
Kakani Katija (kakani@mbari.org)
2023-03-03 13:12:11

*Thread Reply:* @Sara Beery if you’re in town, we should have you come and visit MBARI

Sara Beery (sbeery@caltech.edu)
2023-03-03 13:31:17

*Thread Reply:* Sure! I'm driving up on Thursday, and was planning to drive back Saturday. We could maybe do something Thursday afternoon?

Kakani Katija (kakani@mbari.org)
2023-03-03 14:05:23

*Thread Reply:* Let me follow up over email. It’s an especially tough week for me to organize things since I’m out of doggie-care, but it would be great to connect. More soon.

Sara Beery (sbeery@caltech.edu)
2023-03-03 14:30:17

*Thread Reply:* Sg!

James Withers (james.withers@bbc.co.uk)
2023-03-13 05:26:55

*Thread Reply:* Hi @Sara Beery, thanks for the talk on Friday. It was really interesting to hear about what you’ve been working on. I was wondering if you could share a link to the talk recording and/or the slides so I could look back at some of the things you mentioned and share them with others. I was also really interested in the Alaskan fisheries project you mentioned and wondered if you might have a name or link for that?

Sara Beery (sbeery@caltech.edu)
2023-03-13 12:40:18

*Thread Reply:* I'll DM you my slides!

Here is the github repo for the sonar fish counting project: https://github.com/visipedia/caltech-fish-counting

Work led by @Justin Kay!!

Holger Klinck (hk829@cornell.edu)
2023-03-07 17:28:47

Hi all, I am excited to share that the BirdCLEF 2023 competition is now live. I would love to see some of you participate in it. If you have any questions, please don't hesitate to reach out. Here is the link: https://www.kaggle.com/competitions/birdclef-2023

🐦 Sara Beery, Jon Van Oast, Carly Batist, Ben Weinstein, Viktor Domazetoski, Enis Berk Çoban, Prabath Gunawardane, Fagner Cunha, Timm Haucke, Dan Morris, Elijah Cole (Deactivated), Oisin Mac Aodha, Agnethe Seim Olsen
🙌 Enis Berk Çoban, Fagner Cunha, Jose Ruiz-Munoz
Tyus Williams (tyusdwilliams@berkeley.edu)
2023-03-08 01:23:48

Hello everyone, I hope this message finds you all well. My name is Tyus Williams, and I am a PhD student at UC Berkeley studying carnivore ecology and spatial ecology, hoping to tackle some outstanding questions concerning feral cats and the influence human-dominated landscapes have on the occupancy of mesopredators in the East Bay Regional Shorelines of California. I'm slightly acquainted with Sara Beery, who invited me to join the Slack channel given my interest in harnessing the technological power of AI for the data management portion of my first chapter, which uses camera trapping to observe the presence of wildlife species in response to direct and indirect anthropogenic stressors. While I don't have a CS background, I know there has to be a better way than my current method of prefiltering photos (with the incredible assistance of undergrads) based on defined criteria, then uploading the sorted raw images onto Wildlife Insights and confirming the ID of the wildlife species I am interested in. If you have any prior experience or expertise with camera trapping data management, and any input on protocols or methods that could improve the speed at which I process images, I would love to hear it. People have mentioned MegaDetector, but I wonder if that is really any different from Wildlife Insights. If you have any questions please let me know.

Dan Morris (agentmorris@gmail.com)
2023-03-08 09:55:26

*Thread Reply:* There's no right answer; everyone's problem is a little different, and the optimal workflow depends on the details of your problem, your team, your institution's data policy, etc. But a couple links that may be helpful...

I try to keep a list of systems that exist (with or without AI, local or on the cloud) here:

https://agentmorris.github.io/camera-trap-ml-survey/

And we can usually help narrow down your choices if you can answer the questions here:

https://github.com/microsoft/CameraTraps/blob/main/collaborations.md#questions-about-specific-camera-trap-use-cases

It may be interesting to have that discussion here, but if that exceeds what's practical to type into the Slack reply box, feel free to email cameratraps@lila.science .

Camera Trap ML Survey
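For what it's worth on the MegaDetector vs. Wildlife Insights question: unlike Wildlife Insights (a full hosted platform), MegaDetector is just a detector whose batch runs emit a JSON file of boxes per image, which you can use to pre-filter empties before human review. A minimal sketch of that filtering step follows, using a hand-made snippet in the MegaDetector batch-output layout; the file names and the threshold value are invented:

```python
import json

# Tiny inline example in the MegaDetector batch-output layout (a real run
# produces one entry per image; these file names are hypothetical).
md_output = json.loads("""
{
  "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
  "images": [
    {"file": "cam01/0001.jpg",
     "detections": [{"category": "1", "conf": 0.93, "bbox": [0.1, 0.2, 0.3, 0.4]}]},
    {"file": "cam01/0002.jpg", "detections": []},
    {"file": "cam01/0003.jpg",
     "detections": [{"category": "1", "conf": 0.05, "bbox": [0.5, 0.5, 0.1, 0.1]}]}
  ]
}
""")

THRESHOLD = 0.2  # illustrative; tune per camera deployment

def has_animal(image, threshold=THRESHOLD):
    # Keep an image if any detection is an "animal" above the threshold.
    return any(d["category"] == "1" and d["conf"] >= threshold
               for d in image["detections"])

keep = [im["file"] for im in md_output["images"] if has_animal(im)]
print(keep)  # only the confident-animal image survives
```

The sorted survivors can then go to Wildlife Insights (or any review tool) for species-level confirmation, which is where the two tools complement rather than replace each other.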
Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2023-03-09 02:01:23

*Thread Reply:* Hi Tyus, interesting topic. What co-variates are you considering to characterize the human-dominated landscape that can help explain the predator’s occupancy?

Tyus Williams (tyusdwilliams@berkeley.edu)
2023-03-09 02:46:37

*Thread Reply:* Great question, I am looking at housing density, population density, road distance, vegetation cover, and residential housing distance. I'm trying to see if I can gather noise levels but we will see.

👍 Antonio Ferraz, Dan Morris
Jan Kees (jankees.schakel@sensingclues.org)
2023-03-09 14:43:19

*Thread Reply:* did you already check out TrapTagger? Your case sounds like a perfect fit

Toryn Schafer (tschafer@tamu.edu)
2023-03-09 12:51:37

Hello! I am supervising a master's level statistics capstone project on herpetological camera trap data. It's primarily a data analysis project (no novel methodological developments) using supervised CNNs. I am hoping we can turn it into a short manuscript. If you have a suggestion for a journal or outlet, let me know! Thanks

Diego Marcos (diego.marcos.gonzalez@gmail.com)
2023-03-09 14:17:56

The GeoLifeCLEF23 competition, within FGVC and ImageCLEF, is also open! Submit your predictions for plant species presence/absence using satellite images and time series before May 17th https://www.kaggle.com/competitions/geolifeclef-2023-lifeclef-2023-x-fgvc10/

kaggle.com
😍 Sara Beery, gvanhorn, Oisin Mac Aodha, Venkatesh Ramesh
👍 Leonardo Viotti, gvanhorn, Riccardo de Lutio, Nina van Tiel, Elijah Cole (Deactivated)
🌍 Oisin Mac Aodha
Andrzej Białaś (andrzej@appsilon.com)
2023-03-10 06:02:38

👋 Hey Folks!

📣 Exciting news! We just published 🔗a new article on Mbaza AI.

Mbaza AI featured here has already established a strong presence in Gabon and is soon expected to expand to Kenya (thanks to Ol Pejeta Conservancy).

More details below 👇

🤖 Ol Pejeta folks needed an accurate model to confidently automate the wildlife identification task; they provided us with a subset of manually labeled data from 2018 with images from 9 different cameras in 2 migratory corridors.

🦏 Ol Pejeta Conservancy spans over 90,000 acres in central Kenya and is committed to conserving biodiversity, protecting endangered species, driving economic growth and improving the lives of rural communities.

Please let me know what you think in the comments! 😄

If you have any questions I’ll be happy to pass them on to our ML team, feel free to DM me 🥳 (and if you would like more news like this I have a cool newsletter 😎).

appsilon.com
Estimated reading time
11 minutes
👍 Stephanie O'Donnell, Cameron Trotter, Dan Morris, Ted Schmitt, Cara Appel, Sara Beery, Matt Hron, Aakash Gupta
🎉 Jon Van Oast, Emilio Luz-Ricca, Abhay
🙌 Anton Alvarez
Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-03-10 06:38:36

*Thread Reply:* Ol Pejeta is great; they're involved with several conservation tech initiatives. I helped out with an aerial census in 2020 for Save the Elephants just before lockdown. We flew thermal + visible cameras in a Cessna; part of the novelty was assessing thermal for dawn/dusk. I was stunned to learn that digital imaging is still not the norm for surveying and that manual counting from the back of a plane is more common. I got a ride back over the Rift Valley with the passenger door off (because the cameras were mounted there), which was an interesting experience.

Photo is refuelling at the end of the day. Interestingly Ol Pejeta has an airstrip inside the park where tourists pay lots of money to fly into (there's also a larger airport in Nanyuki nearby). One of the few places where "elephant on the runway" is a valid reason to delay takeoff.

🙌 Andrzej Białaś
Jon Van Oast (jon@wildme.org)
2023-03-10 12:34:00

*Thread Reply:* was about to say the same thing: we work with ol pejeta and i have been lucky enough to have been there. very cool to see more tech helping out there -- congrats!

🙌 Andrzej Białaś, Jason Holmberg (Wild Me)
Andrzej Białaś (andrzej@appsilon.com)
2023-03-10 12:35:37

*Thread Reply:* Agreed, great folks there no doubt. ℹ️ We are working on packaging the model and shipping it to Mbaza so it can be used. Soon 🙂

🎉 Jon Van Oast
Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-03-18 12:23:38

*Thread Reply:* Hi Andrzej - This is cool, congratulations! We are working on a similar tech, which is being deployed in two tiger reserves in India. Maybe we can connect and exchange notes.

🎉 Jon Van Oast, Andrzej Białaś
Andrzej Białaś (andrzej@appsilon.com)
2023-03-23 16:23:49

*Thread Reply:* Cheers @Aakash Gupta, and sure! Let's talk. Sent you a connection request via LinkedIn 🤝

👍 Aakash Gupta
Andrzej Białaś (andrzej@appsilon.com)
2023-03-24 04:12:33

*Thread Reply:* Hey Folks, me again!

Mbaza 2.1.0, with the Ol Pejeta model (described in the blog post above), is live! The installer links and instructions can be found on our GitHub.

We'll move on to helping the Ol Pejeta team make the best use of the tool, and later focus on bringing more features (and updating manuals, training materials and so on; it's about time 😮‍💨). Anyway, stay tuned, exciting stuff on the horizon, but as usual, lots of work to be done first 🤞.

🙌 Sara Beery, Cara Appel, Dan Morris
❤️ Matt Hron, Jon Van Oast
Jaroslav Bezdek (jaroslav.bezdek@strv.com)
2023-03-12 10:27:50

Hello, fellow machine learning enthusiasts! 👋

My name is Jaroslav, and I’m a machine learning engineer with 5 years of experience. While I love my current job, I crave more meaningful work. As a result, I’m currently looking for a side job that allows me to make a positive impact and contribute to something meaningful. 🏞️ 🐘

If you or anyone you know is looking for a skilled and dedicated ML engineer to lend a hand, I’m available for up to 20 hours per week. Please note that my previous experience and projects can be found on my LinkedIn profile. Feel free to reach out to me via comments, DMs, or email - whichever is most convenient for you. 🙏

Looking forward to hearing from you!

👋 Sara Beery, Dan Morris, Stephanie O'Donnell, Timm Haucke, Andrew Schulz, Jason Holmberg (Wild Me)
😍 Sara Beery, Kalindi Fonda, Jason Holmberg (Wild Me)
👋:skin_tone_3: Pen-Yuan Hsing
🎉 Jon Van Oast, Jason Holmberg (Wild Me)
:thumbsup_all: Frederic Fol Leymarie, Jason Holmberg (Wild Me)
Magali Frauendorf (magali.frauendorf@slu.se)
2023-03-23 04:35:09

*Thread Reply:* Hi Jaroslav, maybe contributing to a challenge like this would be an interesting option for you?! https://www.fruitpunch.ai/challenges/ai-for-european-wildlife

fruitpunch.ai
Jaroslav Bezdek (jaroslav.bezdek@strv.com)
2023-03-23 09:25:42

*Thread Reply:* Hello Magali! 👋 Thank you for the suggestion! I would rather find a project for longer cooperation, but I will definitely check fruitpunch! 🙂

George Darrah (george.darrah@systemiq.earth)
2023-06-07 07:29:56

*Thread Reply:* Jaroslav - one of the most exciting companies in the world is hiring in this space... https://basecamp-research.homerun.co/?lang=en

basecamp-research.homerun.co
🙏 Jaroslav Bezdek
Kristina Kupferschmidt (kupfersk@uoguelph.ca)
2023-03-16 10:34:24

👋 Hi all, I hope you're having a great Thursday!

I am working on a project where we need to create an object detection algorithm for individual leaves from overhead images. In order to do this we will have to manually identify and annotate our images.

Does anyone have any recommendations for tools that make manual annotation easy? Thank you in advance! ☺️ 🙏:skin_tone_2:

Felipe Parodi (parodifelipe07@gmail.com)
2023-03-16 10:38:59

*Thread Reply:* What type of annotation? bounding box?

Lukáš Adam (lukas.adam.cr@gmail.com)
2023-03-16 10:41:51

*Thread Reply:* I used https://github.com/opencv/cvat to annotate turtles. But after annotating 1000 turtles, I saw them whenever I closed my eyes. Good luck with leaves 😄

👍 Kristina Kupferschmidt, Maciej Adamiak, Josh Veitch-Michaelis, Vincent Christlein
🍃 Kristina Kupferschmidt, Sara Beery
❤️ Felipe Parodi, Ando Shah
🐢 Andrzej Białaś, Cody Kupferschmidt, Yseult Hb, Lukas Picek
Kristina Kupferschmidt (kupfersk@uoguelph.ca)
2023-03-16 10:42:29

*Thread Reply:* @Felipe Parodi, yes I was thinking of using bounding box!

Felipe Parodi (parodifelipe07@gmail.com)
2023-03-16 11:14:42

*Thread Reply:* i’ve found roboflow to be a great (free) bbox tool, and they have an AI-assisted labeler (once you label a handful of imgs)

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-03-16 11:16:04

*Thread Reply:* If you have an edu account, look at Segments.ai; they have some neat assisted tools which are free for students/academics.

CVAT is also decent for free software - it's not as polished as the commercial offerings in my experience, but it's fine for bounding boxes. See also label studio, etc.

Eric Price (eric.price@ifr.uni-stuttgart.de)
2023-03-16 12:14:02

*Thread Reply:* if you are working collaboratively, there are a number of web-based solutions where you can have a lot of annotators operate on the same dataset, such as Label Studio (https://labelstud.io/). We used that in the past and it was quite useful, although setting up the task has a bit of overhead; definitely worth it for large datasets and many annotators

Label Studio
Ben Weinstein (benweinstein2010@gmail.com)
2023-03-16 12:38:23

*Thread Reply:* I also use label studio.

Ben Weinstein (benweinstein2010@gmail.com)
2023-03-16 12:38:37

*Thread Reply:* Out of pure curiosity, can we see a sample image with annotations?

Ben Weinstein (benweinstein2010@gmail.com)
2023-03-16 12:39:13

*Thread Reply:* at what resolution can you see individual leaves? We've been talking about identifying trees from drone imagery to supplement ground data.

Dan Morris (agentmorris@gmail.com)
2023-03-16 22:27:04

*Thread Reply:* I recently needed to do some viewing and editing of bounding boxes in a simple, local app, and landed on this one:

https://github.com/mfl28/BoundingBoxEditor

I've been really happy with it. I think if I was doing a Big Serious Project I would second everyone's vote for Label Studio, but to just quickly edit a small number of boxes, and/or to preview a large number of boxes, BoundingBoxEditor has been quite good.

Stars
35
Language
Java
🙌:skin_tone_5: Ando Shah
Vincent Christlein (vincent.christlein@fau.de)
2023-03-17 03:27:23

*Thread Reply:* We used LabelBox in the past (fine for small projects otherwise it gets costly), but during the last two years, we used only CVAT.

Ephantus Kanyugi (ndungu.kanyugi@gmail.com)
2023-03-31 15:26:19

*Thread Reply:* CVAT is a great tool for polygon, polyline, bounding box and keypoint annotation. If you only need bounding boxes, then LabelImg is a great free tool for that. In case you need someone to do the manual annotation, I can help with that as well, as that is my area of expertise.
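Whichever tool you choose, the boxes usually land in a standard interchange format; LabelImg, for example, writes Pascal VOC XML, which the Python standard library can parse directly. A small sketch (the annotation content here is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A minimal Pascal VOC annotation, as LabelImg would write it (values made up).
voc_xml = """
<annotation>
  <filename>leaf_plot_001.jpg</filename>
  <object>
    <name>leaf</name>
    <bndbox><xmin>34</xmin><ymin>50</ymin><xmax>120</xmax><ymax>161</ymax></bndbox>
  </object>
  <object>
    <name>leaf</name>
    <bndbox><xmin>200</xmin><ymin>80</ymin><xmax>260</xmax><ymax>140</ymax></bndbox>
  </object>
</annotation>
"""

def parse_voc(xml_text):
    # Return (label, xmin, ymin, xmax, ymax) tuples for each annotated object.
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((obj.findtext("name"),
                      *(int(bb.findtext(tag)) for tag in
                        ("xmin", "ymin", "xmax", "ymax"))))
    return boxes

print(parse_voc(voc_xml))
```

Having a tiny parser like this makes it painless to convert annotations into whatever format your detector's training pipeline expects.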

Chirag Nagpal (chiragn@andrew.cmu.edu)
2023-03-17 14:22:39

Hello AI For Conservation 👋👋

I am Chirag Nagpal, a PhD candidate at CMU with an interest in Time-to-Event and Survival Analysis, problems that could be of immediate interest to wildlife conservation.

My methodological research has been applied to a large number of clinical applications, and I also maintain a repository of reusable Python tools for time-to-event and survival regression: https://autonlab.org/auton-survival/

I am interested in participating and seeking potential collaborations in this space!

☺️ Sara Beery, Lucia Gordon, Jason Holmberg (Wild Me), Katelyn Morrison
👋 Mark Goldwater, Jason Holmberg (Wild Me), Katelyn Morrison
Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-03-18 12:31:53

Hello! 👋 St. Patrick's Day greetings to you all☘️☘️

My name is Aakash Gupta; I am a climate entrepreneur and an AI4Good advocate. My team has developed an AI-enabled platform for biodiversity estimation of wild populations. The platform is being deployed in two wildlife sanctuaries in India, viz. Kawal Tiger Reserve and Amrabad Tiger Sanctuary. The system has processed more than 3.85mn camera trap events spread over 4 census years

This project has given me the opportunity to interact with a number of decision-makers and wild-life conservation experts in India and abroad. And their vision and passion for wildlife conservation and adopting new technology has inspired me. Look forward to meeting more like-minded researchers, entrepreneurs, and enthusiasts.

https://www.linkedin.com/posts/telangana-ai-missionai-activity-7041721797955788801-CFoS

linkedin.com
🙌 Josh Seltzer, Lucia Gordon, Eddie Zhang, Sara Beery, Prabath Gunawardane, Felipe Montealegre-Mora, Jason Holmberg (Wild Me), Akshay Paruchuri
🎉 Jon Van Oast, Rebecca Wilks, Aditee Kumthekar, Jason Holmberg (Wild Me)
Ameya Patil (ameyapatil249@gmail.com)
2023-03-23 03:29:36

A GSoC equivalent for Earth sciences; it is only open to EU residents, though: https://climate.copernicus.eu/ecmwfs-code-earth-2023#:~:text=Code%20for%20Earth%20is%20an,and%20Destination%20Earth%20(DestinE).

🌏 Rebecca Wilks
Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2023-03-23 11:02:19

Hello everyone 👋

I am Christoph Praschl, and I am an Assistant Professor for Computer Vision and Software Engineering at the University of Applied Sciences Upper Austria. In my research I mostly focus on computer vision methodologies in the context of conservation. Currently, I am working on the drone-based detection of animals in forests using airborne light field samples (http://www.bambi.eco/; unfortunately the page is currently only available in German, but an English translation will follow when there is time ^^).

If you have any questions or see any potential for collaborations don't hesitate to contact me 😊

👋 Suzanne Stathatos, Majid Mirmehdi, Dan Morris, Eric Price, Adam Noach
Chris Lang (chrislang@ucsb.edu)
2023-03-23 12:46:32

Hi everyone!

My name is Chris Lang, I am a software engineer at the Benioff Ocean Science Laboratory working on a handful of applied marine science projects including some using machine learning (vision).

We're hiring a Software Data Engineer to support the Clean Currents Coalition and other marine technology projects at the Benioff Ocean Science Laboratory at UC Santa Barbara. Learn more about this opportunity or share it with your network here [https://recruit.ap.ucsb.edu/JPF02400]. The estimated salary range for this position is $90,000 to $120,000. Applications are due April 7th, 2023. UCSB is an AA/EOE, including disability/vets.

recruit.ap.ucsb.edu | Apply by Apr 7, 2023 | Department: Marine Science Institute - Office of Research
🐳 Suzanne Stathatos, Sara Beery, Jaroslav Bezdek, Toryn Schafer, Yseult Hb, Felipe Montealegre-Mora, Chris Lang
🎉 Jon Van Oast, Declan, Dan Morris, Sara Beery, Chris Lang
Chris Lang (chrislang@ucsb.edu)
2023-03-23 14:03:08

*Thread Reply:* The website incorrectly lists the anticipated start date as February. It will now be late May-June but will still be negotiable.

Steve Haddock (haddock@mbari.org)
2023-03-23 17:57:08

set the channel topic: Contact info to join: aiforconservation@gmail.com

Steve Haddock (haddock@mbari.org)
2023-03-23 17:59:50

^^ <sorry for the presumption of changing the channel topic, but it was the default slack message, and I thought this might be more useful>

👍 Sara Beery
✔️ Jon Van Oast
Akash Jaiswal (akash10987@gmail.com)
2023-03-24 08:00:33

Hi. Does anybody know of an open-source unsupervised clustering algorithm to group sounds with similar patterns together for further manual annotation?

Ben Weinstein (benweinstein2010@gmail.com)
2023-03-24 12:26:15

*Thread Reply:* We were just talking about this yesterday to add to a grant proposal; post if you find something (@Sara Beery). Can you describe what you hope to do with it, just so I have a sense of the workflow?

👍 Akash Jaiswal
Sara Beery (sbeery@caltech.edu)
2023-03-24 14:50:34

*Thread Reply:* I know Tom Denton at Google has been doing some flavors of this (with query-based structure) but I don't know that they have anything open source.

👍 Akash Jaiswal
Sara Beery (sbeery@caltech.edu)
2023-03-24 14:50:41

*Thread Reply:* @Holger Klinck, do you and the BirdNet team do anything along these lines?

Holger Klinck (hk829@cornell.edu)
2023-03-24 15:20:47

*Thread Reply:* Yeah, a little with Tom. But nothing open source yet - first we need to generate some reasonable results :)

👍 Sara Beery
Matt Weldy (matthewjweldy@gmail.com)
2023-03-24 16:28:53

*Thread Reply:* I've been using some distance-based queries lately that are working fairly well. Feel free to message me and I will share some of the initial stuff I have running. There are also the ScaNN and Milvus libraries.

👍 Sara Beery, Akash Jaiswal
Akash Jaiswal (akash10987@gmail.com)
2023-03-25 03:12:13

*Thread Reply:* @Ben Weinstein Currently, I am dealing with bird species identification in a subset of my recordings for a paper, and it's overwhelming (particularly for a place like Delhi, which has so many bird species, many with very large repertoires). I am also planning to do fine-scale annotation in that subset for some ML-based work. I know Kaleidoscope (Wildlife Acoustics) has such a clustering function, but I don't know of any open-source tool with such a utility. So, for my use, it could help with two things: 1) estimating a direct count of sonotype richness in a sound file, and 2) fine-scale manual annotation of those sound patterns, which can be further used for developing a deep learning model for species identification.

Vincent Christlein (vincent.christlein@fau.de)
2023-03-27 05:56:08

*Thread Reply:* Maybe some of the works of Christian Bergler (e.g. Animal-spot) may be worth having a look at: https://github.com/ChristianBergler

👍 Akash Jaiswal, Matt Weldy
Akash Jaiswal (akash10987@gmail.com)
2023-03-27 08:25:01

*Thread Reply:* @Vincent Christlein Thanks for sharing.

Vincent Christlein (vincent.christlein@fau.de)
2023-03-27 10:00:11

*Thread Reply:* you're welcome

Ben Williams (ben.williams.20@ucl.ac.uk)
2023-03-27 13:55:32

*Thread Reply:* Interested to follow this. In the past, for similar problems, I've used something like YAMNet or PANNs for feature extraction, UMAP to reduce dimensions, then clustering with affinity propagation, where you don't need to designate the number of clusters; in theory the clusters should group similar sounds. I'm sure there are more up-to-date/bespoke approaches like those shared above!

👍 Akash Jaiswal
Prach Sri (prach@todreamalife.com)
2023-03-24 16:16:18

Does anyone have experience in carbon economics?

I'm totally stumped on a carbon calculation using an allometric model for AGB from DBH. The Colombian national standard isn't working and I don't see why.

Thanks in advance!

Sara Beery (sbeery@caltech.edu)
2023-03-24 16:37:37

*Thread Reply:* @David?

👍 David, Prach Sri
David (dwddao@gmail.com)
2023-03-24 22:54:06

*Thread Reply:* What allometric equations are you using? Is the intention to calculate AGB for a forest carbon credit?

Prach Sri (prach@todreamalife.com)
2023-03-27 15:14:24

*Thread Reply:* We did a carbon study to compare with FRELs. I’m using Chave Model Type II - Tm

ln(AGB) = a + b1·ln(D) + b2·(ln(D))² + b3·(ln(D))³ + d·ln(q)

Diameter (D) in cm and density (q) in g/cm3. The parameters for a and b are also provided in the attached paper.

Happy to review it anytime!
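A minimal sketch of evaluating a Chave-type model of this form. The coefficients below are placeholders, not the published values; substitute the fitted parameters from the paper being used.

```python
# Sketch of the Chave-type model quoted above:
#   ln(AGB) = a + b1*ln(D) + b2*(ln D)**2 + b3*(ln D)**3 + d*ln(q)
# with PLACEHOLDER coefficients (a, b1, b2, b3, d are NOT published values).
import math

def agb_kg(D_cm, rho_g_cm3, a=-1.8, b1=2.3, b2=0.02, b3=-0.005, d=0.9):
    """Above-ground biomass (kg) from diameter D (cm) and wood density q (g/cm3)."""
    lnD = math.log(D_cm)
    ln_agb = a + b1 * lnD + b2 * lnD**2 + b3 * lnD**3 + d * math.log(rho_g_cm3)
    return math.exp(ln_agb)

# Sanity check: AGB should grow with diameter at constant wood density.
print(agb_kg(10, 0.6), agb_kg(30, 0.6))
```

One common source of "vastly different outputs" with these models is mixing units (cm vs m, kg vs Mg) or applying coefficients fitted on ln-transformed data without the correction factor, so checking a couple of hand-computed trees like this can help localize the discrepancy.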

David (dwddao@gmail.com)
2023-03-28 12:28:09

*Thread Reply:* Do you know how the FREL was calculated in Colombia? Most likely through interpolation from satellite data, I assume? Did they use Chave et al. (2005)? Maybe they integrated factors such as additionality into their net emissions? We have consistently met discrepancies with reported government data when using our own AI models. Many of the reasons are political.

Prach Sri (prach@todreamalife.com)
2023-03-29 01:29:38

*Thread Reply:* The FREL estimate cites Alvarez 2012. It appears they are using field sampling rather than satellite data. I've run the models for both Chave 2005 and Alvarez 2012 (using the provided best-fit parameters) and get vastly different outputs (from each other, and from the national average). I'm starting to feel the models don't hold up…

David (dwddao@gmail.com)
2023-03-29 05:35:58

*Thread Reply:* Regarding AGB, you can do the following satellite-based sanity check and compare against the aggregated AGB from Global Forest Watch: https://data.globalforestwatch.org/maps/e4bdbe8d6d8d4e32ace7d36a4aec7b93 Spawn, 2020: https://www.nature.com/articles/s41597-020-0444-4 Santoro, 2021: https://essd.copernicus.org/articles/13/3927/2021/essd-13-3927-2021-discussion.html

Their data is available publicly and globally and from my experience, for areas > 5ha, Santoro has been surprisingly close to our field-based experiments in Ecuador: https://arxiv.org/abs/2201.11192

This could help you distill a better signal maybe!

🙏 Prach Sri
Rio Akbar (riosyahakbar36@gmail.com)
2023-03-27 23:47:44

Hey guys! I have a small dataset of 50 images of feral cats. The images are grouped by the area of the body the image focuses on (e.g. right hind leg), and thus each has the pattern markings for that area. I'm doing a small project on individual identification of these small cats, but my supervisor has advised against using object detection models like ResNet or YOLOv5 and to instead focus on identification. Does anyone have any suggestions for this task? I've had a look at I3S, but I'm not sure if it's suitable, and I'm not sure of any other alternatives. If anyone could help, that would be great! 🙂

Vincent Christlein (vincent.christlein@fau.de)
2023-03-28 02:55:17

*Thread Reply:* I'd start by fine-tuning a pre-trained model on your classes. With 50 images, you should augment a lot; otherwise, you will overfit quite quickly.

Rio Akbar (riosyahakbar36@gmail.com)
2023-03-28 02:58:05

*Thread Reply:* Thanks Vincent, do you have a suggestion for a pre-trained model I can use? What do you think about I3S?

Vincent Christlein (vincent.christlein@fau.de)
2023-03-28 03:00:53

*Thread Reply:* what are you referring to? https://github.com/daniel-brenot/I3S-Interactive-Individual-Identification-System-Desktop ? I never worked with that

Sara Beery (sbeery@caltech.edu)
2023-03-28 11:14:52

*Thread Reply:* @Jason Parham

Jacob Ayers (jgayers@ucsd.edu)
2023-03-29 00:44:31

*Thread Reply:* How distinct are the markings between the individual cats? If you could quickly tell with the human eye, maybe something simple such as Hu Moments or basic PCA classification could be worth looking into.
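A minimal sketch of the "basic PCA" idea: embed pattern crops with PCA, then match a query sighting to the nearest known individual. Random arrays stand in for real image crops here; everything (gallery size, crop size, components) is a made-up toy setting.

```python
# Sketch of PCA embedding + nearest-neighbor matching for individual ID.
# Assumptions: random arrays stand in for flattened 32x32 pattern crops.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
# Fake "gallery": 10 known individuals, one flattened crop each.
gallery = rng.normal(size=(10, 32 * 32))
pca = PCA(n_components=5).fit(gallery)
gallery_emb = pca.transform(gallery)

matcher = NearestNeighbors(n_neighbors=1).fit(gallery_emb)
# Query: a slightly noisy re-sighting of individual 3.
query = gallery[3] + rng.normal(scale=0.01, size=32 * 32)
dist, idx = matcher.kneighbors(pca.transform(query[None, :]))
print(int(idx[0, 0]))
```

For markings distinct enough to tell apart by eye, a simple embedding plus nearest-neighbor retrieval like this can be a reasonable baseline before reaching for heavier re-identification models.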

Jason Parham (bluemellophone@gmail.com)
2023-03-31 15:41:37

*Thread Reply:* For now, I would suggest looking at IBEIS https://github.com/Erotemic/ibeis

Rio Akbar (riosyahakbar36@gmail.com)
2023-04-01 22:52:52

*Thread Reply:* Thanks @Jason Parham !

Dan Morris (agentmorris@gmail.com)
2023-03-28 15:18:16

The discussion on the #new_papers channel about drone/aerial wildlife datasets got me motivated to assemble a list of all the annotated drone/aerial wildlife datasets I'm aware of. Is there already a list like this? If not, what am I missing?

https://github.com/agentmorris/agentmorrispublic/blob/main/drone-datasets.md

I think I've vetted everything on that list to make sure the data can actually be downloaded.

If this list is useful, please no one take a dependency on that URL. :) It's a subset of what's listed on https://lila.science/otherdatasets, and I will find a way to make them co-exist more nicely. But I will not bother to do that if someone tells me that this is just a sad, broken version of a list that already exists somewhere else.

👀 Stephanie O'Donnell, Sara Beery, Hirokatsu Kataoka (AIST), Blair Costelloe, Mikey Tabak, Lucia Gordon, Edward Bayes, Mitch Fennell
🛩️ Rowan Converse
Dan Morris (agentmorris@gmail.com)
2023-03-28 15:19:44

*Thread Reply:* Also in doing that search I was surprised by how many unannotated data sets are out there, just orthomosaics that someone created in the process of doing a manual count survey, typically from USGS or US/state F&W groups. I did not include those, but those would be fun to index as well.

👍 Ben Weinstein, Sara Beery, Rebecca Wilks, Edward Bayes
Ronny Hänsch (rww.haensch@gmail.com)
2023-03-28 15:34:42

*Thread Reply:* maybe a bit of a stretch but I would count drone/aerial images for wildlife detection as EO data. So one could add it to the GRSS EOD

Sara Beery (sbeery@caltech.edu)
2023-03-28 17:25:30

*Thread Reply:* I'd say it's helpful to have a separate list for this, since it's a bit specific. Thanks Dan!

Rowan Converse (rowanconverse@unm.edu)
2023-03-29 15:43:48

*Thread Reply:* This is great-- thanks for putting this together! I may have some more additions after completing the survey of the RS/ML Community of Practice group.

Dan Morris (agentmorris@gmail.com)
2023-03-29 16:23:45

*Thread Reply:* Now I'm getting greedy: I'd like not only a list, but a slightly structured record for each dataset, with at least species, resolution, number and type of annotations, etc., plus a couple of sample images, and, as a stretch goal, a couple of images where we've rendered an annotation onto an image (to make sure it's actually possible). It's fine if this is literally done manually (e.g. by opening the annotations in a text editor and drawing a box on the corresponding image in Photoshop), but we can also live without this last step.

This is a bit of a pain, but if we were to divide and conquer, we could do this pretty quickly (like <1 hour), so... reply here if you're willing to sign up to do this for either a specific dataset on the list, or for N datasets that I randomly assign you.

If you're the owner of a dataset and you already have the answer, you still get all the Karma if you volunteer to provide this data for your own dataset!

If I get some takers, I'll send out a link to a spreadsheet.
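For the "render an annotation onto an image" step, a few lines of Pillow are enough instead of Photoshop. This sketch assumes a COCO-style [x, y, width, height] box with made-up values and a blank stand-in image.

```python
# Sketch: draw one bounding-box annotation onto an image with Pillow.
# Assumptions: COCO-style [x, y, w, h] box; values below are hypothetical.
from PIL import Image, ImageDraw

img = Image.new("RGB", (200, 200), "black")  # stand-in for a drone frame
x, y, w, h = 40, 50, 60, 30                  # hypothetical annotation
draw = ImageDraw.Draw(img)
draw.rectangle([x, y, x + w, y + h], outline=(255, 0, 0), width=2)
img.save("annotated_sample.png")
```

Rendering a box this way is also a quick check that the annotation coordinate convention (corner vs. center, pixels vs. normalized) actually matches the images.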

🙋 Kalindi Fonda, Edward Bayes, Aakash Gupta, Josh Veitch-Michaelis
Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-03-30 00:36:10

*Thread Reply:* I'd like to volunteer for this. Let me know what needs to be done.

👍 Dan Morris
Zhongqi Miao (zhongqi.miao@berkeley.edu)
2023-03-30 14:37:27

*Thread Reply:* I would like to volunteer as well for one or two datasets.

👍 Dan Morris
Dan Morris (agentmorris@gmail.com)
2023-03-30 18:37:17

*Thread Reply:* Oh this is great, that's 5 volunteers, I'm almost off the hook for doing anything now. 🙂 I'll DM all of you if I don't have your email address, and send "assignments" by email. Thanks!

🌟 Kalindi Fonda
Dan Morris (agentmorris@gmail.com)
2023-04-12 12:51:03

*Thread Reply:* OK, mission accomplished, we huddled virtually offline and gathered standardized metadata, standardized sample code, and sample annotated images for all the datasets on that list:

https://github.com/agentmorris/agentmorrispublic/blob/main/drone-datasets.md

Thanks to @Zhongqi Miao, @Aakash Gupta, @Edward Bayes, @Kalindi Fonda, and @Josh Veitch-Michaelis for participating in this experiment.

Let us know what the list is missing! And if you're the owner of any of the datasets (I'm looking at you, @Ben Weinstein), let us know if we got anything wrong.

🎉 Edward Bayes, Kalindi Fonda, Rowan Converse
💯 Aakash Gupta
🚀 Carl Boettiger
Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2023-04-25 16:12:44

*Thread Reply:* We are planning to also open-source the data from our research project BAMBI in the future, including drone videos of wildlife such as deer, red deer, boar, chamois, … But this will probably still take some months. I will definitely come back to you regarding that, if you are interested 🙂

🦌 Kalindi Fonda, Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2023-04-25 20:44:27

*Thread Reply:* Definitely, we will want to integrate that in our airborne work.

👍 Christoph Praschl
Thijs (thijs@q42.nl)
2023-03-30 08:16:19

I just got back from a project in Zambia to test a system to mitigate human-elephant conflicts. The rollout was successful and we already saw it in action while we were there. You can get an impression here: https://www.linkedin.com/posts/tsuijtentechforgood-wildlifeconservation-conservation-activity-7047177106161602560-OBhv

linkedin.com
Thijs (thijs@q42.nl)
2023-03-30 08:16:53

*Thread Reply:* I'm really curious to get feedback from people who also have a lot of experience working with elephants.

Thijs (thijs@q42.nl)
2023-03-30 08:18:52

*Thread Reply:* Currently we are using our AI-camera to detect the presence of elephants. But I would be really interested in other ways to detect elephants. For instance using audio: detecting their sounds / rumbles. Are there any experts here in that field? Detecting elephants by sound? 🎤 🐘

🐘 Suzanne Stathatos, Dan Morris, Sara Beery
😎 Jon Van Oast
Andrew Schulz (akschulz@gatech.edu)
2023-03-30 08:33:34

*Thread Reply:* there is quite a lot of work on different frequencies of calls and vocalizations of elephants. I can reach out and see if they have done detection.

Patryk Neubauer (patryk.neubauer@gmail.com)
2023-03-30 08:39:36

*Thread Reply:* You might want to check this out: https://elephantlisteningproject.org/

Elephant Listening Project
👍 Andrew Schulz, Thijs, Clare Price
Thijs (thijs@q42.nl)
2023-03-30 08:48:44

*Thread Reply:* Thanks @Patryk Neubauer! Do you happen to know anyone there?

Patryk Neubauer (patryk.neubauer@gmail.com)
2023-03-30 08:53:25

*Thread Reply:* Know is perhaps a big word 😄 , but Daniela Hedwig from ELP is a mentor on a project I'm contributing to

👍 Thijs
Clare Price (theclareprice@gmail.com)
2023-03-30 19:33:19

*Thread Reply:* Joyce Poole with Elephant Voices developed the Elephant Ethogram and has worked extensively on elephant acoustics!

elephantvoices.org
🐿️ Andrew Schulz
👍 Thijs
Michael Bunsen (notbot@gmail.com)
2023-03-30 20:09:11

Is anyone familiar with any open source mobile apps for collecting data in the field? Something akin to Avenza, or OnX but much simpler that we could modify for a specific field study. Basically looking for the ability to record locations offline, take photos, tag observations and add notes. It doesn't even need a map.

Jon Van Oast (jon@wildme.org)
2023-03-30 20:17:56

*Thread Reply:* I have only played with this a little bit (personally, not professionally), but it seems to have a decent user base and track record.

https://getodk.org/

getodk.org
Michael Bunsen (notbot@gmail.com)
2023-03-30 20:28:24

*Thread Reply:* Hey that looks promising thanks!

Michael Bunsen (notbot@gmail.com)
2023-03-30 20:31:56

*Thread Reply:* Also iNaturalist's upcoming Seek v2 app that uses React Native could be a candidate to modify

👍 Jon Van Oast, Jose Ruiz-Munoz
Catherine Villeneuve (catherine.villeneuve.9@ulaval.ca)
2023-03-31 09:34:02

*Thread Reply:* The mobile version of AI2's EarthRanger (https://www.earthranger.com) can do all of this. In offline mode, it will record your location/photos/observations/notes in a local cache, and once you have access to an internet connection, the app will transmit your data to a main server. It's easy to retrieve the data afterwards through their Python API, and the app is highly customizable. We're currently using ER Mobile for a field census in a 100% offline setting in the Canadian High Arctic (Nunavut). If you are interested, let me know and I can connect you with the right people / share how we've adapted our EarthRanger server to our needs.

earthranger.com
Victor Anton (victor@wildlife.ai)
2023-03-31 11:27:41

*Thread Reply:* The CitSci mobile app (https://citsci.org/apps) might be worth having a look at? I can't tell how much of it is open source, though...

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-03-31 12:17:19

*Thread Reply:* Beat officers in India use NoteCam to capture lat/lon locations during patrolling activity, as well as to take images of waterholes, pugmarks, PIDs, etc. You can attach a note to an image, which can then be exported as a KML file. But this is a freemium tool.

Ephantus Kanyugi (ndungu.kanyugi@gmail.com)
2023-03-31 15:51:47

Hello, my name is Ephantus Kanyugi. I am a data management and annotation expert. Over the past years I have been involved in several scientific projects on facial and body language analysis of animals, where I have been in charge of complex data management pipelines. I am also very passionate about conservation of wildlife in my home country, Kenya, and have a network of connections to conservation organizations in some nature reserves. I was excited to find out about this group, where I can combine my professional interest with my passion for wildlife conservation. I would like to network and offer my services in all tasks related to data annotation, labeling, analysis, etc. Here's a link to my LinkedIn: https://www.linkedin.com/in/ephantus-kanyugi-9a04b7137/

😎 Jason Holmberg (Wild Me), Ephantus Kanyugi, Felipe Parodi, Sara Beery, Yseult Hb
🎉 Jason Holmberg (Wild Me), Pen-Yuan Hsing, Stephanie O'Donnell
👏 Jon Van Oast, Anna Zamansky, Dan Morris, Josh Seltzer, Thomas Radinger
👋 Carly Batist, Mark Goldwater, Viktor Domazetoski, Declan
Ed Miller (ed@hypraptive.com)
2023-04-06 13:57:38

Join me tomorrow for an #AWS Heroes in 15 session on Bear Conservation with ML, Serverless and Citizen Science: https://www.linkedin.com/video/event/urn:li:ugcPost:7038608935079043073/

linkedin.com
linkedin.com
🌟 Talia Speaker, Jason Holmberg (Wild Me), Aakash Gupta
👍 Alayna Van Dervort
Ed Miller (ed@hypraptive.com)
2023-04-06 13:59:47

*Thread Reply:* There's also an interview by AWS Developer Advocate, Linda Haviv, from AWS re:Invent 2022: https://youtu.be/sKGau7c53go

YouTube
Build On AWS (https://www.youtube.com/@BuildOnAWS)
❤️ Talia Speaker, Aakash Gupta
Silvia Zuffi (silvia@mi.imati.cnr.it)
2023-04-07 06:13:15

Hello people, does anybody know of image datasets with segmented plants and their species labels? Thanks!

🐿️ Andrew Schulz
Jose Ruiz-Munoz (jfruizmu@unal.edu.co)
2023-04-07 12:05:06

*Thread Reply:* Maybe it is not exactly what you are looking for but take a look at https://www.kaggle.com/datasets/emmarex/plantdisease

kaggle.com
Jon Van Oast (jon@wildme.org)
2023-04-07 12:20:49

*Thread Reply:* plantnet is a great app/group. maybe this is useful? https://github.com/plantnet/PlantNet-300K

Website: <https://doi.org/10.5281/zenodo.5645731>
Dan Morris (agentmorris@gmail.com)
2023-04-07 15:04:56

*Thread Reply:* Segmented trees: https://github.com/norlab-ulaval/PercepTreeV1

Dan Morris (agentmorris@gmail.com)
2023-04-07 15:09:16

*Thread Reply:* More generally, most of these don't have segmentation labels, but... a list of image datasets that have something to do with plants:

https://lila.science/otherdatasets#images-plants

Silvia Zuffi (silvia@mi.imati.cnr.it)
2023-04-08 07:09:15

*Thread Reply:* Thank you all for the pointers!

👍 Jon Van Oast
Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2023-04-25 16:17:27

*Thread Reply:* Hey Silvia, we have collected quite a large dataset of alpine plants from the Nationalpark Hohe Tauern (Austria). It is currently a closed dataset, but our project partner was interested in open-sourcing it. I could put you in contact, if you are interested :)

Some side information on the dataset can be found in our publications: https://doi.org/10.5220/0011607100003417 and https://doi.org/10.46354/i3m.2022.sesde.006

scitepress.org
cal-tek.eu
Silvia Zuffi (silvia@mi.imati.cnr.it)
2023-04-27 16:29:45

*Thread Reply:* Thank you very much! Let me talk with my collaborators, I am not sure these alpine plants are the species we need to consider.

Alayna Van Dervort (av@thebigwild.com)
2023-04-12 15:08:43

Hi everyone, is anyone familiar with bear den detection other than infrared? Also looking for large drones used in species detection.

Dimitri Korsch (korschdima@gmail.com)
2023-04-17 16:16:07

*Thread Reply:* Recently, I came across the BAMBI project. They aim at species monitoring using UAVs. Unfortunately, the website is in German because it is an Austrian project, but maybe you can contact the project leads.

Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2023-04-25 15:08:40

*Thread Reply:* Hi Alayna,

we are using DJI M30T drones for animal detection in our project BAMBI as mentioned by Dimitri Korsch. So we can definitely talk! :)

James Farrell (jamespfarrell@gmail.com)
2023-04-17 11:11:03

Hi everyone, I have just joined, I’m interested in scaling climate finance with AI.

I have some ideas, and researching plausibility thereof while trying to catch up on what everyone is working on :)

My background is software engineering, and I've been residing mostly in the public blockchain space for the last couple of years, where I co-founded Toucan.earth and KlimaDAO.finance.

The AI side is relatively new for me, so I appreciate any recommendations or exciting projects that are doing good work in the space, or any AI / Machine learning people looking to collaborate.

👋 Jaroslav Bezdek, Dimitri Korsch, Aakash Gupta, Ephantus Kanyugi, Louis Moreau, Catherine
💯 Aakash Gupta, Ruiz Rivera
Ruiz Rivera (ruiz.rivera93@gmail.com)
2023-07-10 12:14:45

*Thread Reply:* Hi @James Farrell, nice to see you on here! I've also done some work for Klima and I'm familiar with the work you guys have done with Toucan in tokenizing carbon credits!

I was just curious what types of AI/conservation projects you were hoping to undertake that apply to your work in ReFi? Maybe there's an opportunity for collaboration with myself and the other folks in this space 👍

Aidan Dunlop (aidan.dunlop@sky.uk)
2023-04-19 10:15:24

Hi everyone! and thanks @Stephanie O'Donnell for the invite :) I’m a software engineer working on MLOps, and currently studying a part-time master’s in AI Ethics & Society. I’m interested in how we can use AI in a responsible and transparent way.

I have a small favour to ask. For my dissertation, I am looking into the challenges AI/ML practitioners face when using open-source tools for explainable machine learning.

I am looking for participants for a (max) 10-minute anonymous survey. Your help would be much appreciated!! You can access the survey here: https://cambridge.eu.qualtrics.com/jfe/form/SV_9AKEzvsIjIZ1LXU.

If you could share the survey with other AI/ML practitioners that would be very helpful :)

Thanks ☺️

cambridge.eu.qualtrics.com
🤩 Carly Batist, Sara Beery
👋 Sara Beery, Ephantus Kanyugi
❤️ Stephanie O'Donnell, Talia Speaker, Kristina Kupferschmidt
Maciej Adamiak (adamiak.maciek@gmail.com)
2023-04-19 10:36:34

*Thread Reply:* One of my colleagues did extensive research on XAI libraries. You can contact him; maybe he could also help you with your task. Here is his blog post with all his findings: https://www.reasonfieldlab.com/post/a-complete-guide-on-computer-vision-xai-libraries

reasonfieldlab.com
Aidan Dunlop (aidan.dunlop@sky.uk)
2023-04-19 11:44:43

*Thread Reply:* thanks @Maciej Adamiak, will do! 🙂

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-04-24 23:47:06

*Thread Reply:* I hope the data from the survey (anonymized) will be shared with the participants?

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-04-24 23:52:56

*Thread Reply:* Also you should add "I have not heard of this tool" as an option 🙂

Aidan Dunlop (aidan.dunlop@sky.uk)
2023-04-25 09:51:14

*Thread Reply:* @Aakash Gupta I’m happy to share the anonymised data 🙂

Aidan Dunlop (aidan.dunlop@sky.uk)
2023-04-25 09:52:25

*Thread Reply:* fair point, the idea was that you don’t have to click on each of the rows, so by not selecting an answer for that row it’s implied that the participant hasn’t heard of the tool

Scott Hosking (jshosking@gmail.com)
2023-04-26 04:14:30

📢 Register for the x @eds_book Reproducibility Challenge by midnight AoE this Friday 28 April!

👉 Learn more and register at bit.ly/ci-2023-rc-eds!

🐦 Like, RT, and tag others on Twitter to help us promote!

https://twitter.com/Climformatics/status/1651007755270905856

Twitter
👍 Oisin Mac Aodha, Sara Beery, Shiva Muruganandham, Dimitri Korsch, Arron Watson
Nicky Nicolson (n.nicolson@kew.org)
2023-05-09 12:14:05

Hello! I guess a lot of people here are (or work with) research software engineers, so I wanted to share the start of the 2023 Diverse RSE talks (Supporting Equity, Diversity and Inclusion within the Research Software Engineering community). Next week (May 16th) there is a talk from Dave Horsfall who is using his Software Sustainability Institute fellowship to advocate for better mental health among RSEs: https://diverse-rse.github.io/events/2023-05-16

software.ac.uk
DiveRSE - Supporting EDI within the RSE community
❤️ Sara Beery, Timm Haucke, Chris Llorca, Nora Gourmelon
Cara Appel (appelc@oregonstate.edu)
2023-05-09 16:44:50

Hello, does anyone have suggestions for appropriate preprint servers and/or journals for publishing a description of a software tool related to annotation and model training? Thanks in advance!

Michael Yair (m1cha3l.ya1r@gmail.com)
2023-05-10 01:57:53

*Thread Reply:* maybe - https://zenodo.org/

Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2023-05-10 02:50:40

*Thread Reply:* Depending on the scope of your software, maybe the MDPI Software journal or Elsevier's SoftwareX. Both are journals intended for tool-related publications 🙂 Additionally, maybe also Nature Scientific Reports. But all three are still scientific, so a pure software documentation won't work out 😄

Peter van Lunteren (contact@pvanlunteren.com)
2023-05-10 04:58:52

*Thread Reply:* If the software is open source: https://joss.theoj.org

joss.theoj.org
ISSN
2475-9066
😎 Jason Holmberg (Wild Me)
Cara Appel (appelc@oregonstate.edu)
2023-05-10 14:00:22

*Thread Reply:* thank you! yes, it is open source

Gracie Ermi (gracieermiifthen@gmail.com)
2023-05-10 12:59:42

Hi everyone! Just wanted to plug that @Carly Batist and I released another update to the Conservation Tech Directory! We are up to 836 resources in the directory, so if you haven't checked it out yet (or in a while), head on over! As always, if you know of any additional conservation tech resources that aren't in the directory yet, you can fill out our google form and we'll get them added. And if you aren't subscribed to our email alerts, head over to the website so you can be notified any time we release an update! AND we've now reached site visitors from 100 countries!! 🎉

conservationtech.directory
🎉 Carly Batist, Talia Speaker, Pen-Yuan Hsing, Clare Price, Sara Beery, Shir Bar, Timm Haucke, Andy Viet Huynh, Dan Morris, Viktor Domazetoski, Jason Holmberg (Wild Me), Diego Calanzone, Andrzej Białaś, Ruiz Rivera
🔥 Rowan Converse, Andy Viet Huynh, Jason Holmberg (Wild Me), Andrzej Białaś
Clare Price (theclareprice@gmail.com)
2023-05-10 15:56:57

*Thread Reply:* Oh very cool! I work for a professor at UBC on her “Smart Earth Project,” where we are doing a similar compilation and database production using web scrapes; would love to connect with you and chat more about what you’re up to if you ever have some spare time.

👍 Carly Batist
🙌 Carly Batist
Ruiz Rivera (ruiz.rivera93@gmail.com)
2023-07-10 12:09:53

*Thread Reply:* Thanks for sharing, this would be an awesome resource!

Ayan Mukhopadhyay (ayanmukg@gmail.com)
2023-05-15 12:50:53

Hi everyone, we are organizing the data science for social good workshop at KDD this year! Check it out and submit some cool papers 🙂

https://kdd-dssg.github.io/

kdd-dssg.github.io
👍 Oisin Mac Aodha, Sara Beery, Ankita Shukla, Maia Adar, Yseult Hb, Jaanak
👍 Chris Llorca
Katie Zacarian (katie@earthspecies.org)
2023-05-17 18:00:07

We're looking for a seasoned Research Director to join our growing Earth Species Project team and lead our long-term ML research agenda into the communication systems of other species. Please share with your networks! https://lnkd.in/gQpHizXJ

earthspecies.org
😍 Sara Beery, Jon Van Oast, Stefan Schneider, Jason Holmberg (Wild Me), Lucia Gordon, Juliana Gomez Consuegra
🎉 Jon Van Oast, Dan Morris, Jason Holmberg (Wild Me)
Tarun (tarunsharma.pes@gmail.com)
2023-05-19 14:38:16

Hi everyone, I'm curious about people's experiences using vision transformers such as ViT or transformer based detection models such as DETR, DINO, grounding DINO. I'm curious whether people have found these models to outperform CNN based classification/detection models by a big margin and how straightforward or cumbersome it was to fine tune these models in practice.

👀 Ben Williams, Rebecca Wilks
Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-05-22 15:28:36

*Thread Reply:* My experience with one recently (MetaFormer) is that they work. I don't know about "big margin" because frankly we didn't test other architectures and we wanted something quick and dirty. In this case we picked the architecture that claimed SOTA on some tasks and tried it (and it certainly works well, even with 5k classes). To a great extent it depends on dataset quality and the usual question of how unbiased your evaluation is (e.g. checking that your test data are balanced and one easy class isn't skewing your results)

Did it work? Yes. Was it a pain because of hardcoded choices the researchers made in their repo? Yes 😅

A strong recommendation is to use HuggingFace's model zoo, they have "reference" implementations of a lot of this stuff and I think it's probably easier to use that than hack somebody's conference submission that only works on benchmark data. Inputs and outputs are well defined and the code is mostly well-tested. They also provide training examples for most of their models here https://github.com/NielsRogge/Transformers-Tutorials/tree/master

I think another interesting avenue for ecology that is under-explored is better pre-trained models from e.g. drone observations, camera traps, etc. DINOv2 had some pretty good boosts for satellite analysis that way.
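On the evaluation point above, a quick sanity check is to compare overall accuracy against mean per-class (balanced) accuracy; if they diverge a lot, a frequent easy class is probably inflating your results. A minimal sketch with toy label lists (the class names are just illustrative):

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Return overall accuracy, balanced (mean per-class) accuracy,
    and a per-class breakdown."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    per_class = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / sum(total.values())
    balanced = sum(per_class.values()) / len(per_class)
    return overall, balanced, per_class

# Toy example: "deer" dominates and is easy, "tiger" is rare and hard.
y_true = ["deer"] * 90 + ["tiger"] * 10
y_pred = ["deer"] * 90 + ["deer"] * 8 + ["tiger"] * 2
overall, balanced, per_class = per_class_accuracy(y_true, y_pred)
# overall looks great (0.92) but balanced accuracy is only 0.6
```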

👍 Rebecca Wilks
Ben Weinstein (benweinstein2010@gmail.com)
2023-05-19 19:38:35

I’m working on a talk for the day-to-day responsibilities of a machine learning researcher in biology. I’m looking for literature, testimonials, thoughtful pieces on model development and iteration. A lot of what’s out there is just fluffy 1000 word stuff on medium. Ideas? I found this helpful.

❤️ Sara Beery
Michael Yair (m1cha3l.ya1r@gmail.com)
2023-05-20 06:02:59

*Thread Reply:* I can share my specific experience with an agent-based model I developed a while ago. There is the ODD protocol, developed specially for such models because of the complexity of explaining agent-environment interactions. I also worked on a pipeline for protein comparison, which I didn't find any protocol for (back in those days), but for sure there were no similarities between the processes, aside from the IDE and the Python environment 🙂.

for further details about ODD: https://www.jasss.org/23/2/7.html

Journal of Artificial Societies and Social Simulation
Caleb Robinson (calebrob6@gmail.com)
2023-05-21 19:21:38

*Thread Reply:* Hey Ben, I found this post by Karpathy extremely helpful -- http://karpathy.github.io/2019/04/25/recipe/

karpathy.github.io
John Martinsson (john.martinsson@ri.se)
2023-05-22 03:14:42

*Thread Reply:* The paper "Perspectives in machine learning for wildlife conservation" by Tuia et al., 2022, may be helpful: https://www.nature.com/articles/s41467-022-27980-y.

Nature
Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2023-05-22 15:22:47

*Thread Reply:* Google has a fairly technical doc about this (on top of Karpathy's essays which are usually great - as well as his Zero to Hero series)

https://github.com/google-research/tuning_playbook

👍 Ben Weinstein, Caleb Robinson, Sara Beery
Sara Beery (sbeery@caltech.edu)
2023-05-30 14:26:39

https://www.nature.com/articles/s41467-023-38901-y

Nature
❤️ Stefan Schneider, Hamed Alemohammad, Jon Van Oast, Suzanne Stathatos, Declan, Yseult Hb, Alex Brace, Elie Alhajjar, Taiki Sakai - NOAA Affiliate, Malte Pedersen, Carly Batist, Timm Haucke, Andrew Schulz, Clare Price, Devis Tuia, Yongjun Song, Caterina Barrasso, Caleb Robinson, Chuck Stewart, lin xiong, Pen-Yuan Hsing, Kurran Singh, Marcus Lapeyrolerie, Lucia Gordon, Omiros Pantazis, Magali Frauendorf, Felipe Montealegre-Mora, Juliana Gomez Consuegra, Rebecca Wilks, Fadel, Rajiv Pattni, Olof Mogren, Luke Sheneman, Eric Colson
📡 Stefan Schneider, Jon Van Oast, Elie Alhajjar, Shir Bar, Rowan Converse, Malte Pedersen, Carly Batist, Timm Haucke, Caleb Robinson, Casey Youngflesh, Brandon Hays
🤩 Santiago Ruiz Guzman
Fadel (fadel.seydou@gmail.com)
2023-06-06 04:17:29

*Thread Reply:* @Paul Allin Hey, check this out

Paul Allin (allinpaul@gmail.com)
2023-06-10 02:20:46

*Thread Reply:* Pretty amazing

Elizabeth Campolongo (e.campolongo479@gmail.com)
2023-06-01 17:51:12

Hi Everyone! We’d love for you to join us Aug. 14-17 for Image Datapalooza: Call for Participation. This exciting, participant-driven event will bring together an interdisciplinary group interested in using AI/ML to extract scientific knowledge from image and video data. We anticipate seeing participants ranging from AI/ML researchers, data scientists, domain scientists, and data curators, all the way to tool developers, metadata researchers, and knowledge engineers. Participants will self-organize into small groups to work hands-on and collaboratively with self-selected targets and outcomes towards the motivations and goals of the event.

To apply to participate, please fill out the Image Datapalooza 2023 Application for Participation by the end of June 12, 2023. Funds to assist with travel expenses are available but limited, as is space.

😍 Sara Beery, Andy Viet Huynh, Jaroslav Bezdek, Diego Calanzone, Rita Pucci
😎 Jon Van Oast, Timm Haucke
🙌 Jenna Kline
Elizabeth Campolongo (e.campolongo479@gmail.com)
2023-06-12 14:53:32

*Thread Reply:* Reminder: Today is the last day to apply for Image Datapalooza!

Dan Stowell (dan.stowell@naturalis.nl)
2023-06-02 06:07:16

Job at UCL (London, UK): "Lecturer/Associate Professor of Ecology and Innovative Technologies" http://tinyurl.com/bzed2kxy Deadline this Monday!

👍 Oisin Mac Aodha, Omiros Pantazis, Andrew Schulz, Yseult Hb, Jinsu Elhance
👍:skin_tone_3: Pen-Yuan Hsing
🙌 Omiros Pantazis, Jinsu Elhance
Rajiv Pattni (rajivcpattni@gmail.com)
2023-06-07 04:56:08

Hello all! Great to be here :) and thanks for adding me @Sara Beery

I'm a cofounder of Terraspect (before that I was an environmental engineer). We're trying to mitigate terrestrial ecosystem services impact from buildings during their operational life by identifying chemical, light, noise and heat mitigants.

Does anyone know of any datasets on embodied ecological impacts of building stock and causal drivers ideally in the urban global south? (just no more great crested newt data please! 🦎🙂)

Thanks!

🦎 Maia Adar, Sara Beery
Ramya (rmalu001@ucr.edu)
2023-06-07 12:21:54

https://edzt.fa.em4.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX/job/14568

Candidate Experience site
🙌 Maia Adar
Chinmay Talegaonkar (ctalegaonkar@ucsd.edu)
2023-06-08 21:53:21

Hi, can anyone point me to event-based datasets collected in the wild, for different species of animals?

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-06-08 22:14:17

*Thread Reply:* Have you checked the LILA datasets?

https://lila.science/datasets

👍 Taiki Sakai - NOAA Affiliate
Dan Stowell (dan.stowell@naturalis.nl)
2023-06-09 11:45:58

Hi all. Job offer! We're now recruiting for a Project Manager, for our new EU Doctoral Network project "Bioacoustic AI": https://www.naturalis.nl/over-ons/project-manager-for-eu-doctoral-network-project-bioacoustic-ai Please do share this. Happy to answer any questions. Thanks!

🔈 Oisin Mac Aodha, Stephanie O'Donnell, Sara Beery, Timm Haucke, Yseult Hb, Omiros Pantazis
❤️ Carly Batist, Sara Beery, Timm Haucke, Jon Van Oast
Evangeline Corcoran (ecorcoran@turing.ac.uk)
2023-06-12 07:23:42

Hi all, I'm guest editing a special issue of Remote Sensing on remote sensing applications in biodiversity conservation and ecological modelling, which is accepting submissions until October 9th 2023. If you are interested in contributing please feel free to message me directly or check out the special issue link (https://www.mdpi.com/journal/remotesensing/special_issues/8OQ6QPGDQ2). Thanks! 📡🌏

 
📡 Ronan Wallace, Jon Van Oast, Sara Beery, Declan, Andy Viet Huynh, Katelyn Morrison, Sepand Dyanatkar
❤️ Ronan Wallace, Andy Viet Huynh
👀 Heather
Jon Van Oast (jon@wildme.org)
2023-06-12 19:43:45

📆 in case you are attending CVPR2023, i have created a channel #cvpr2023 so that we may coordinate and discuss items related to the conference, such as informal meetups, exchanging ideas, etc. (i did not see something like this already on this slack. so please let me know if i missed it.)

🙌 Suzanne Stathatos, Sara Beery, Katelyn Morrison, Anastasia Pagán
Sara Beery (sbeery@caltech.edu)
2023-06-12 19:46:13

*Thread Reply:* Also, if you aren't attending CVPR but you are in the Vancouver, BC area, feel free to join in and meet up as well!

💕 Jon Van Oast
👍 Thor Veen
Sara Beery (sbeery@caltech.edu)
2023-06-12 19:46:38

*Thread Reply:* Also crowdsourcing locations for an informal happy hour that is good for groups!

Aniruddha Saha (anisaha1@umd.edu)
2023-06-15 09:21:46

*Thread Reply:* Visited these places with large groups last time I was in Vancouver a few years ago

  1. https://goo.gl/maps/upccLW9RKWe1jVQC6
  2. https://goo.gl/maps/nN9UZbU7TJvngNmk9
  3. https://goo.gl/maps/Hv8wuJcTQ8ut2c3GA Earls was great. That would be my top recommendation. But things might have changed quite a bit over the years.
💕 Jon Van Oast
Drea Burbank (drea@savimbo.com)
2023-06-18 17:34:15

Savimbo just released an indicator-species biodiversity crediting methodology for review that allows indigenous groups to sell direct to climate markets, by calculating their environmental credits on Google Earth Engine. We are looking for developers to help us review and open-source this code. Please DM me if you want to be a technical reviewer. It's gone to Cercarbono, Verra, and Plan Vivo for certifying body adoption.

Savimbo
👍 Ted Schmitt, Griffin Flannery, Heather
George Darrah (george.darrah@systemiq.earth)
2023-06-19 04:50:15

Have been thinking/doing/investing at the intersection of biology and AI - here's a synthesis of the story so far: https://www.linkedin.com/posts/georgedarrah_the-key-to-natures-intelligence-is-artificial-activity-7075478440811884544-IFDm?utm_source=share&utm_medium=member_desktop

@Aamir Ahmad thanks again for inspiration re the zebras!

linkedin.com
👀 Stephanie O'Donnell, Jason Holmberg (Wild Me)
👍 Talia Speaker
❤️ Katie Zacarian
Levi Farrand (levi.farrand@gmail.com)
2023-06-20 07:22:14

Hi all. I am new to the group and wanted to introduce our start-up company - Deep Forestry.

We have built an autonomous drone that can fly into forests, in between the trees, to create digital twins of entire ecosystems in a cost-effective and scalable way, at the push of a button with no drone piloting expertise required. We use three-dimensional deep learning algorithms to segment and classify different objects in the forest, and have a strong team of robotics developers and AI experts who are always looking for new ways to adapt our existing systems for the benefit of conservation goals. We are always looking for new pilot studies to test and improve our systems on different tasks.

If you need access to robotics and AI expertise, autonomous lightweight (or heavy-lift) drones, or expansive digital twins of forest ecosystems at the push of a button, perhaps we could be a good match for collaboration. Note that we're also starting to work with hyperspectral data, if that is of interest. We are always happy to add our expertise to collaborative research projects. Note also that our first autonomous drone products are now commercially available, so we can also provide you with a quick and easy way to conduct large-scale ecosystem surveying in projects that you may already have funded.

Email me (CEO) anytime to connect - Levi@deepforestry.com
  1. Company introduction - https://youtu.be/xSe4M_zk4oQ
  2. Cloud services - https://youtu.be/796pBsJuNx8
  3. Autonomous robotics within the forest - https://youtu.be/G5wqgTNVwFI
👋 Omiros Pantazis, Elie Alhajjar, Viktor Domazetoski, Carly Batist, Yseult Hb, Ankita Shukla, Hannah Kim
👍 Yonghao Xu, lin xiong, Juliana Gomez Consuegra
👋:skin_tone_4: Chris Llorca
👍:skin_tone_4: Chris Llorca
Arvin Sun (sunbingyou1984@gmail.com)
2023-06-20 11:23:48

Hi guys,

Nice to meet you all! 👋 Thanks Sara for having me here! 🫰 I'm the CEO & Founder of Traini, and a longtime champion of serial entrepreneurship and innovation culture. I have been engaged in food delivery, e-commerce, new retail entrepreneurship and team management in China and the United States for more than 10 years.

1. Based in Palo Alto, before that in LA.
2. Founder of Traini, an AI-driven platform for dog trainers and parents (motion capture and PetGPT). Also: • US Partner - DayDayCook (pre-IPO NYSE) • Mentor - Visionary Education Foundation • Investor - focused on growth-stage startups (A round).
3. Immersed in Stanford every day, love to play basketball. My LinkedIn: http://linkedin.com/in/arvinsun

👋 Arvin Sun
Silvia Zuffi (silvia@mi.imati.cnr.it)
2023-06-20 16:47:10

Who wants to try getting a 3D dog model from an RGB image? Try out our demo here: https://huggingface.co/spaces/runa91/bite_gradio

huggingface.co
😍 Sara Beery, Justin Kay, Mitchell Rogers, Viktor Domazetoski, Andrew Schulz, Timm Haucke, Elie Alhajjar
🐶 Rowan Converse, Yseult Hb, Dan Morris, Oisin Mac Aodha, Timm Haucke, Elie Alhajjar, Rebecca Wilks, Arvin Sun
Devis Tuia (devis.tuia@epfl.ch)
2023-06-21 10:08:47

If someone is at cvpr and not in the #cvpr2023 channel, we have an informal meetup tonight at the Digital Orca (the killer whale pixelized statue) at 18h45 before going together to the reception! Everybody welcome!

👍 Oisin Mac Aodha, Silvia Zuffi, Sara Beery, Jason Holmberg (Wild Me), Anastasios Angelopoulos, Jon Van Oast
🐳 Oisin Mac Aodha, Matthias Zuerl, Katelyn Morrison, Anastasia Pagán, Sara Beery, Elizabeth Campolongo, Jason Holmberg (Wild Me), Anastasios Angelopoulos, Nico Lang
Giana Cirolia (giana@berkeley.edu)
2023-06-22 22:08:00

Hi Climate enthusiasts,

There is a VC that I know of that is rolling out a grant funding initiative, specifically focused on synthetic biology and climate.

The grants are fast grants (30 min application, 21 day turn around and up to 100k per project) with no IP taken and no reporting.

They are looking for BOTH applicants (project proposals) AND PIs to join their expert review panel.

Would anyone in this extended network be interested in being an expert reviewer on this climate initiative? Please forward widely. You can participate even if your general focus is simply climate!

Professors can both be reviewers and apply for the grants...AND many students per lab can apply.

Happy to directly connect any interested PI's with the program leads.

Warmly, Giana Cirolia

See below for full details:

As mentioned, Manifest Grants is a 'fast grants' program. We award $25K - $100K to scientists to prototype their most ambitious synbio ideas for solving the climate crisis. We're looking for senior reviewers for the June-July period.

Expectations:
• Review ~10 applications.
• Review time ~40 mins per application. Fully numeric scoring system, no write-up required!
• Get paid $30 per application!
• Submit the review within 8 days.
• Sign an NDA to protect the confidentiality of the proposed ideas.
• You can apply even if you participate in the review!
Interested? Here's my calendar if helpful.

P.S. Our first iteration was with Repro Grants. We received 465 applications from all the top universities and funded 12 projects in female reproductive science. Read more (Forbes article).

— Sara Kemppainen Fifty Years Linkedin / Twitter

Forbes
👍 Jon Van Oast
Giana Cirolia (giana@berkeley.edu)
2023-06-23 13:19:48

*Thread Reply:* Thank you all for the initial interest! Please keep direct messaging me :)

Heather (h_peacock@ducks.ca)
2023-07-19 16:56:40

*Thread Reply:* Hi @Giana Cirolia, can you provide a link for the funding applications/submissions? Are these grants only available to academics / in the US, or can NGOs apply for some projects? Thanks!

Vaughn Shirey (vmshirey@gmail.com)
2023-06-23 14:06:29

Hi all! I'm new to the group and just wanted to throw a quick intro on the channel.

I'm a current postdoctoral fellow at the University of Southern California looking at how we can use CV to rapidly mobilize historical biodiversity data in order to reconstruct past distributions, phenologies, and morphologies. The end goal is understanding how global change processes are impacting biodiversity on the planet and to translate these data and reconstructions to on-the-ground conservation organizations looking to conduct species' assessments and influence policy. I largely do this work with butterflies but am interested in all other groups as well (especially other invertebrates!).

Looking forward to being here and learning more! BTW, you can find me on Twitter, LinkedIn, or ResearchGate if you'd like to chat or feel free to shoot me a DM. :)

Twitter
👋 Jon Van Oast, Yseult Hb, Vincent Christlein, Shir Bar, Lucia Gordon
👋:skin_tone_4: Chris Llorca
Philippe Hermant (philippe.hermant@entropisme.com)
2023-06-23 19:17:32

Hello, being new here I'd like to know if a summary of AI's impacts on biodiversity exists? As there are numerous topics, does such an overview exist? And is there something specific on generative AI and biodiversity?

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-06-23 20:11:07

*Thread Reply:* Not sure if I fully understand your ask, but this paper is a great overview of the ML space within biodiversity! https://www.nature.com/articles/s41467-022-27980-y

Nature
🤗 Blair Costelloe
Rajiv Pattni (rajivcpattni@gmail.com)
2023-06-25 07:05:21

*Thread Reply:* Likewise not sure if totally got the ask but this paper seems relevant too https://gpai.ai/projects/responsible-ai/environment/biodiversity-and-AI-opportunities-recommendations-for-action.pdf

Philippe Hermant (philippe.hermant@entropisme.com)
2023-07-07 08:39:16

*Thread Reply:* Thanks @Carly Batist @Rajiv Pattni recommendations are useful.

Sonny Burniston (sonnyburniston@yahoo.co.uk)
2023-06-24 09:34:32

Hi Everyone, Just thought I’d briefly introduce myself. I work as a software engineer at NatureMetrics primarily working on backend systems and data pipelines. I have a keen interest in ML engineering and AI in general. I hope to learn more from everyone in this community about how this can be applied to Biodiversity. If you have any interesting opportunities or would like to collaborate please reach out! (Particularly interested in any research opportunities available) Thanks! Sonny 🙂

NatureMetrics
👋 Carly Batist, Elie Alhajjar, Jason Holmberg (Wild Me), Jon Van Oast, Sara Beery, Dan Morris
🙂 Rajiv Pattni
🧬 Dan Morris
Jacob Marks (jamarks13@gmail.com)
2023-06-26 14:27:15

Hi everyone!

My name is Jacob and I'm an ML engineer and developer evangelist at an open source cv/ml company Voxel51. Before this, I completed my PhD in physics, and I'm really excited about applications of AI for climate and conservation. Thanks @Sara Beery for letting me join this awesome group!

Three Opportunities:

  1. I help run a virtual computer vision meetup with 4,500+ members. If anyone here would be interested in speaking at one of our events, please let me know!
  2. I write on Medium, and I'd love to highlight some of the incredible work this community is doing. If you are using CV in climate/conservation and want me to write about it, feel free to reach out 🙂
  3. The company I work for helps make datasets more accessible - we worked with authors of a bunch of CVPR datasets to make their datasets publicly browsable at cvpr.fiftyone.ai. If you want to make it easier for others to view and use your datasets, please reach out! Thanks so much, and so excited to connect with the amazing people here
Voxel51
meetup.com
🙌 Peter van Lunteren, Ben Weinstein, Vincent Christlein, Katelyn Morrison, Victor Anton
👋 Peter van Lunteren, Aakash Gupta, Dimitri Korsch, Neil Kale
🙌:skin_tone_3: Pen-Yuan Hsing
Carly Batist (cbatist@gradcenter.cuny.edu)
2023-06-28 16:31:33

🚨RFCx/Arbimon is hiring!! Data science folks - Come work with me and an absolutely all-star group developing AI/ML techniques and models for acoustic monitoring & biodiversity conservation! https://storage.googleapis.com/rfcx-wordpress-media/2023/06/406f13e6-rfcx-data-scientist.pdf

🔊 Suzanne Stathatos, gvanhorn, Michael Yair, Katelyn Morrison, Jason Holmberg (Wild Me)
😎 Jon Van Oast, Ștefan Istrate, Katelyn Morrison, Maddie Cusimano, Jason Holmberg (Wild Me)
❤️ Talia Speaker, Matt Weldy, Yseult Hb, Jason Holmberg (Wild Me), Prabath Gunawardane, Nicolas Arrieta Larraza
👀 Taiki Sakai - NOAA Affiliate
Ditiro Rampate (ditirorampate@gmail.com)
2023-06-29 09:17:53

📢 Seeking your support to attend Deep Learning Indaba in Accra, Ghana! 🌟

Hi everyone! 👋

I wanted to reach out and ask for your support in attending the Deep Learning Indaba in Accra, Ghana. As an Omdena chapter lead in Botswana and a member of Sisonke Biotik, an AI and healthcare community in Africa, this conference presents a fantastic opportunity for me to enhance my AI skills and contribute to the development of AI in Africa.

The Deep Learning Indaba brings together some of the brightest minds in AI and offers a platform for learning and networking. By participating in this event, I'll have the chance to connect with experts and passionate individuals, accelerating our efforts to leverage AI advancements in Africa and improve access, quality, and equity in AI development.

I kindly ask for your support by considering a donation towards my journey. Every contribution, no matter the size, will bring us closer to unlocking the potential of AI for the betterment of Africa.

Please donate here: https://gogetfunding.com/help-ditiro-attend-deep-learning-indaba-in-accra/

Additionally, I would greatly appreciate it if you could share this message in your networks to help spread the word. Together, we can make a lasting impact and create a brighter future through the power of AI.

Thank you for your support and belief in the transformative power of AI. Let's seize this opportunity and drive positive change in Africa!

Best regards,

Ditiro Rampate

Omdena | Building AI Solutions for Real-World Problems
sisonkebiotik.africa
❤️ Helge Rhodin
Ditiro Rampate (ditirorampate@gmail.com)
2023-08-14 06:04:13

*Thread Reply:* Hello guys 👋:skin_tone_5:, I am still raising funds to attend the Deep Learning Indaba this September in Accra. I also have fundraising posts on Twitter and LinkedIn and would appreciate it if you could not only like but also share them among your networks for better reach.

Deep Learning Indaba 2023
Twitter
linkedin.com
👏 Jon Van Oast
Luisa Orsini (l.orsini@bham.ac.uk)
2023-07-01 09:39:52

hello everyone, my name is Luisa. My research helps develop new tools for biodiversity monitoring and forecasting. I work with industry and regulators to find practical ways to halt biodiversity loss. I am also the co-lead of the Alan Turing interest group Biodiversity monitoring and Forecasting: https://www.turing.ac.uk/research/interest-groups/biodiversity-monitoring-and-forecasting

🙌 FANQI Z, Carly Batist, Sonny Burniston, Felipe Parodi, Nazanin Rezaei, Katelyn Morrison, Elie Alhajjar, Andrew Schulz, Sara Beery, Anastasios Angelopoulos, Aida Mashkouri, Juliana Gomez Consuegra, Thor Veen
👋 Carly Batist, Ștefan Istrate, Sonny Burniston, Andre Telfer, Katelyn Morrison, Elie Alhajjar, Oisin Mac Aodha, Shir Bar, Sara Beery, Anastasios Angelopoulos, Cameron Trotter, Dimitri Korsch, Cathy Atkinson, Aakash Gupta, Heather
Marundu Muturi (marundu@appsilon.com)
2023-07-03 07:32:49

👋:skin_tone_6: Hello, Good People.

I'm Marundu from Appsilon and I'm excited to share some Mbaza-related news: 📢 🗣️

There are two new Mbaza add-ons ready for you to use. Both are installable from Github and work out-of-the-box on the csv output from Mbaza:

  1. mbaza-sequencer: Group images (sequences) together to give a combined prediction
     ◦ Experiments find this improves classification accuracy by up to 5%
     In the example, max_delay=5, meaning images will be sequenced together until the image-to-image delay is > 5 seconds.

Why is it useful? The first prediction in the csv is of a hare 🐇 and you will see why when you see the first image! (shown near the end)

Combining the predictions corrects this to an aardvark 💪:skin_tone_6:

  2. mbaza-mv-predicted: Move classified images to year / week / species folders

Both work on the .csv file output from Mbaza for Python 3.8+. The repositories also have installation and usage instructions, let us know if anything is unclear or missing in the documentation.

Attached is also a video demo using the two add-ons. A couple of things to note:
• The first prediction from the Mbaza .csv file is a Hare, but the sequencer corrects this to an Aardvark after grouping predictions (images shown at end)
• In the example used, an image-to-image delay to sequence images is used (< 5 seconds) but you can also specify a maximum number of images per sequence (see the documentation for all options)
If you have any questions, let me know.
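For anyone curious how the sequencing works, here is a minimal sketch of the idea (not the actual mbaza-sequencer code; the row format, timestamps in seconds, and the majority-vote rule are illustrative assumptions): images sorted by timestamp are grouped while the gap stays within max_delay, and each group takes its most common per-image prediction.

```python
from collections import Counter

def group_into_sequences(rows, max_delay=5):
    """Group (timestamp_seconds, predicted_label) rows into sequences.

    Rows are assumed sorted by timestamp; a new sequence starts whenever
    the image-to-image gap exceeds max_delay seconds.
    """
    sequences = []
    current = []
    for ts, label in rows:
        if current and ts - current[-1][0] > max_delay:
            sequences.append(current)
            current = []
        current.append((ts, label))
    if current:
        sequences.append(current)
    return sequences

def combined_prediction(sequence):
    """Majority vote over a sequence's per-image predictions."""
    return Counter(label for _, label in sequence).most_common(1)[0][0]

# Three images 2s apart: one "hare" misprediction gets outvoted,
# while the image 56s later starts a fresh sequence.
rows = [(0, "aardvark"), (2, "hare"), (4, "aardvark"), (60, "hare")]
seqs = group_into_sequences(rows, max_delay=5)
```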

appsilon.com
🙌 Peter van Lunteren, Oisin Mac Aodha, Sara Beery
❤️ Anton Alvarez
Dan Watson (dan@sntech.co.uk)
2023-07-10 09:34:00

Hi everyone. Can't believe I'm only just joining this Slack but it's good to be here (thanks for the suggestion @Andrew Schulz!). Good to see some familiar faces! I've recently been looking into underwater computer vision and have a few questions to ask around applying it to different things. Would love to chat with folks working on that type of technology.

👋 Daniel Grzenda, Sara Beery, Andrew Schulz, Justin Kay, Cameron Trotter, Jon Van Oast, Katelyn Morrison
Malte Pedersen (mape@create.aau.dk)
2023-07-10 10:15:28

*Thread Reply:* Hi Dan, welcome! You can join the #marine channel for discussions regarding underwater computer vision 🐟 📷

👀 Dan Watson
🙌 Dan Watson
Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2023-07-10 17:56:45

We are hiring a postdoc to study and prototype innovative satellite-tag communication systems to track the movement of animals from space.

The selected candidate will work in an interdisciplinary team, including experts on the NASA-JPL Mission Designs, leaders of active and passive Radio Frequency satellite communications and animal movement ecologists.

👍 G. Andrew Fricker
Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2023-07-10 17:57:04

https://www.jpl.jobs/job/R4460/Postdoc-Spaceborne-Satellite-tag-RF-Systems-to-Measure-Spatiotemporal-Patterns-of-Wildlife

JPL (Jet Propulsion Laboratory)
🔥 Dan Watson, Sara Beery, Toryn Schafer, Ando Shah
Maddie Cusimano (maddie@earthspecies.org)
2023-07-11 01:13:13

curious if anyone will be at the animal behavior society conference this week in portland?

👋 Blair Costelloe
👍 Jon Van Oast
👏 Katie Zacarian
Blair Costelloe (blaircostelloe@gmail.com)
2023-07-11 10:15:01

*Thread Reply:* See you soon at the drone workshop, actually!

😄 Maddie Cusimano
Edward Bayes (bayesbayes@gmail.com)
2023-07-12 17:55:35

Hi everyone! I'm looking to fine-tune MegaDetector and a few other models for a tiger conservation project I'm working on with the Government of Bhutan and was wondering if anyone has come across any tiger-specific camera trap datasets or models? All I've been able to find is this dataset - https://lila.science/datasets/atrw - and this model - https://github.com/KupynOrest/AmurTigerCVWC - but they use images from zoos and I'm keen to use images from the wild if possible. Thanks so much! Ed

🐅 Sara Beery, Jason Holmberg (Wild Me), Ariel Chamberlain, Anastasios Angelopoulos, Shir Bar, Cameron Trotter
🐯 Sara Beery, Jon Van Oast, Jason Holmberg (Wild Me), Dan Morris, Anastasios Angelopoulos
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2023-07-12 17:57:21

*Thread Reply:* Just detection or re-ID too? We're working on multi-species big cat re-ID models and were going to include the lila.science dataset as well. Happy to collaborate. Feel free to DM directly.

🎉 Jon Van Oast, Edward Bayes
Edward Bayes (bayesbayes@gmail.com)
2023-07-12 18:00:17

*Thread Reply:* Hi Jason, thanks so much for coming back to me. Just detection initially but we'd like to explore re-id in the future! I'll DM you. Thanks so much! 😊

Dan Morris (agentmorris@gmail.com)
2023-07-12 18:46:58

*Thread Reply:* I can't help you with "tiger-specific datasets", but if I grab the giant .csv file that contains taxonomic information for all the camera trap data on LILA:

https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/

...and I grep for "panthera tigris", I get 321 hits. Not a lot, but that's something. If you download that big .csv file and run:

cat lila_image_urls_and_labels.csv | grep -i "panthera tigris" | cut -d "," -f 2

...you should get a flat list of URLs that's easy to download, e.g.:

https://lilablobssc.blob.core.windows.net/wcs-unzipped/animals/0462/1185.jpg
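To actually fetch those, a hedged Python sketch of the same filter-and-download flow (it mirrors the grep/cut command above: case-insensitive match anywhere in the row, URL taken from the second column; treat the exact csv layout as an assumption):

```python
import csv
import os
import urllib.request

def matching_urls(csv_path, query="panthera tigris"):
    """Yield the URL (second column) of every row mentioning the query,
    case-insensitively -- same filter as grep -i ... | cut -d "," -f 2."""
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) > 1 and query.lower() in ",".join(row).lower():
                yield row[1]

def download_all(csv_path, out_dir="tiger_images"):
    """Fetch each matching image into out_dir, keeping the original filename."""
    os.makedirs(out_dir, exist_ok=True)
    for url in matching_urls(csv_path):
        dest = os.path.join(out_dir, url.rsplit("/", 1)[-1])
        urllib.request.urlretrieve(url, dest)
```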

Tigers are an interesting case where basically all of the candidate data comes from areas that (for good reason) have historically been hesitant to allow data publication, even images, so data sparsity is a bit fundamental to your problem. Maybe 321 wild camera trap images plus the handful of zoo images is good enough to get you started... does the organization you're working with have any labeled images, and/or do they think they have tigers amongst a pile of unlabeled images?

Also when you say "fine-tune MD", confirm that you mean "add species (or tiger/non-tiger) classes to MD", as opposed to "make MD more accurate"? There are lots of situations where fine-tuning to make MD more accurate would be worth it, but my guess is that MD is probably doing asymptotically well on tigers: they're big, they're patterned, they look like lots of other species MD saw in training, and we did have a bunch of tiger data from the Wildlife Institute of India during training.

❤️ Edward Bayes, Aakash Gupta
Dan Morris (agentmorris@gmail.com)
2023-07-12 18:50:18

*Thread Reply:* And of course, there's always the time-tested strategy of begging Ecology Twitter for data.

😆 Edward Bayes
Shivam Shrotriya (shivam.shrotriya@gmail.com)
2023-07-13 02:58:29

*Thread Reply:* Hi Edward, here is a fine-scale model for MegaDetector (https://github.com/bhlab/SpSeg) that works really well at segregating tigers (>98% accuracy). This model was developed using datasets from the Central Indian landscape, so it should work fine for Bhutan as well.

Unfortunately, training dataset isn't available for public use.

❤️ Edward Bayes
Edward Bayes (bayesbayes@gmail.com)
2023-07-13 04:35:16

*Thread Reply:* Thanks so much Dan and Shivam!

@Dan Morris - really useful feedback, thanks so much! I’ll find those 321 images in the LILA dataset! And totally hear your point re: data scarcity. They do have labeled data, yes, but to exactly your point there are sensitivities in sharing such data, particularly outside the country, so before navigating that thorny issue, we wanted to see how performant we could get a model on publicly available data. Re: fine-tune, I mean classes yes, not improving accuracy (using your super help tutorial as a start https://www.kaggle.com/code/agentmorris/fine-tuning-megadetector)

@Shivam Shrotriya - thank you so much! It sounds like this might be exactly what I was looking for. Thanks again!

Dan Morris (agentmorris@gmail.com)
2023-07-13 19:13:40

*Thread Reply:* Definitely let us know what you learn re: SpSeg, fine-tuning, etc.

The only minor thing I'll correct here - in the interest of giving credit where credit is due - is that the fine-tuning notebook you linked to isn't really mine, I just forked it temporarily to make minor copy edits that were pulled into the upstream version. The original version is here:

https://www.kaggle.com/code/evmans/train-megadetector-tutorial

It is a really nice tutorial, but I can't take credit for how nice it is. :)

❤️ Edward Bayes
Edward Bayes (bayesbayes@gmail.com)
2023-07-13 20:41:53

*Thread Reply:* shall do! thanks again!

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-07-14 15:34:01

*Thread Reply:* A suggestion for such long-tailed classes is to run an image search on Google/Bing or other search engines. It will give you a large set of images, which you can simply download and use for training.

👍 Edward Bayes
Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-07-14 15:37:17

*Thread Reply:* I work with the Indian government on deploying an AI platform for processing and storage of CT images. A lot of forest officers in India are on Twitter, and they keep posting wildlife images. A search on Twitter should give you some datapoints. DM me if you would like to talk.

👍 Edward Bayes
Patrick Beukema (patrickb@allenai.org)
2023-07-14 18:40:22

We (Skylight, AI2) are hiring an MLE to help build maritime intelligence applications for ocean conservation: https://boards.greenhouse.io/thealleninstitute/jobs/5171288 #AI4Good

🐟 Suzanne Stathatos, Omiros Pantazis, Alan Papalia, Kristina Kupferschmidt
😎 Jon Van Oast, Katelyn Morrison, Alan Papalia, Kristina Kupferschmidt, Dan Watson
❤️ Ben Williams, Aakash Gupta, Emilio Luz-Ricca
Patrick Beukema (patrickb@allenai.org)
2023-07-14 19:52:05

Does anyone here know what the best strategy is for generating precise geolocation of every pixel in a Sentinel-2 image? I worked with VIIRS data for a previous model, and NASA produces latitude and longitude arrays for the raw imagery, which enables very precise geolocation for object detection. Their method is described in this technical report: https://www.star.nesdis.noaa.gov/jpss/documents/ATBD/D0001-M01-S01-004_JPSS_ATBD_VIIRS-Geolocation_B.pdf Does anyone know whether it is possible to replicate that method with the Sentinel-2 data to obtain similar lat/lon arrays for each pixel?

Hamed Alemohammad (h.alemohammad@gmail.com)
2023-07-19 12:26:42

*Thread Reply:* Hi @Patrick Beukema, each S-2 scene has metadata in the GeoTIFF file which contains the coordinates of the upper left pixel of the image, and the resolution of each pixel. These two are enough to generate the coordinates of each pixel in the image. If you load the file using rioxarray in Python you already get the coordinates in the dataset array.
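As a rough illustration of that arithmetic (just the upper-left corner plus per-pixel resolution from the metadata, not tied to any particular library - the UTM numbers below are made up), a minimal numpy sketch:

```python
import numpy as np

def pixel_coords(ulx, uly, xres, yres, width, height):
    """Center coordinates of every pixel from a geotransform.

    ulx/uly: coordinates of the upper-left corner of the upper-left
    pixel; xres/yres: pixel size (yres is negative for north-up rasters).
    """
    cols = ulx + (np.arange(width) + 0.5) * xres   # x of each column center
    rows = uly + (np.arange(height) + 0.5) * yres  # y of each row center
    # 2-D easting/northing (or lon/lat) arrays, one value per pixel
    return np.meshgrid(cols, rows)

xs, ys = pixel_coords(ulx=300000.0, uly=5100000.0, xres=10.0, yres=-10.0,
                      width=4, height=3)
# xs[0] -> [300005., 300015., 300025., 300035.]
```

This is exactly what rioxarray attaches to the loaded dataset for you; the precision is inherited from the provider's orthorectification, as noted below.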

Patrick Beukema (patrickb@allenai.org)
2023-07-21 14:43:21

*Thread Reply:* hey thanks, yeah I am familiar with that method; my goal is to build the most precise coordinates possible. Do you know what the precision of that method is and if it's optimal?

Hamed Alemohammad (h.alemohammad@gmail.com)
2023-07-21 19:32:00

*Thread Reply:* Reading the scene metadata, as I suggested, you are basically assigning the coordinates to each pixel based on the orthorectification that the data provider did. rioxarray or any other package doesn't change the precision of the coordinates and they are as accurate as provided in the metadata.

If you are interested in correcting the alignment of scenes through time, aka mis-registration error, then the problem is something else. There are methods in the literature to mitigate this but I don't know the best ones.

Luke Sheneman (sheneman@uidaho.edu)
2023-07-17 20:50:17

While studying endangered Northern Idaho Columbian Ground Squirrels at the University of Idaho, we successfully deployed MegaDetector at the edge for near-realtime autonomous decision making workflows in the field. We're using a combination of specially instrumented enclosures, NVIDIA Jetson Nano compute, and various sensors and cameras.

🐿️ Sara Beery, Felipe Parodi, Dan Morris, Sowbaranika, Carly Batist, Vincent Christlein, Julius Roeder, Timm Haucke, Ștefan Istrate, Jason Holmberg (Wild Me), Shir Bar, Michael Bunsen, Emilio Luz-Ricca, Boyu Zhang, Edward Bayes, Rebecca Wilks
💚 Jon Van Oast, Jason Holmberg (Wild Me), Michael Bunsen
👀 Dante Wasmuht, Elizabeth Campolongo, Michael Bunsen, Oscar Schafer
👍 Olivier Gimenez, Dante Wasmuht, Maddie Cusimano, Michael Bunsen, Evan Eskew, Chris Yeh
Sara Beery (sbeery@caltech.edu)
2023-07-17 20:54:43

*Thread Reply:* This is SO COOL

Dan Morris (agentmorris@gmail.com)
2023-07-18 10:55:59

*Thread Reply:* Very cool indeed! Is there a two-sentence summary of what the automated decisions would be? E.g. are you capturing/tagging automatically for survey purposes?

Luke Sheneman (sheneman@uidaho.edu)
2023-07-18 12:08:42

*Thread Reply:* @Dan Morris Right now the system uses the real-time output of MegaDetector to decide whether or not there is an animal in the detection chamber, and if so it continues data collection until no identifiable animal is present. We are extending that to an automated mark/recapture mechanism with the eventual goal of precision-targeted vaccine delivery in the field, down to the individual level.

👍 Dan Morris, Sara Beery, Jason Holmberg (Wild Me), Shir Bar, Boyu Zhang
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2023-07-18 13:59:14

*Thread Reply:* Let me know if Wild Me (wildme.org) can be a future part of individual ID part.

👍 Sara Beery
Luke Sheneman (sheneman@uidaho.edu)
2023-07-18 14:30:47

*Thread Reply:* @Jason Holmberg (Wild Me) Definitely! Looking now at your website. Individual CV ID for small mammals like deer mice and ground squirrels is challenging. I'll be in touch.

👍 Sara Beery, Jason Holmberg (Wild Me)
Dan Stowell (dan.stowell@naturalis.nl)
2023-07-18 04:46:38

Our new Doctoral Network "Bioacoustic AI" now has a website! Funded PhDs in Europe coming soon. Find out about the project: https://bioacousticai.eu/

🙌 Inês Nolasco, Sonny Burniston, Nora Gourmelon, Omiros Pantazis, Georgia Atkinson, Viktor Domazetoski, Stephanie O'Donnell, Justin Kay, Carly Batist, Yseult Hb, Subhransu Maji, Emilio Luz-Ricca, Nicolas Arrieta Larraza, Thomas Radinger
😄 Maddie Cusimano
🙌:skin_tone_4: Chris Llorca
💕 Jon Van Oast
🦇 Marius Miron
❤️ Carly Batist, Ben Williams
👍 Eelke
Arvin Sun (sunbingyou1984@gmail.com)
2023-07-19 14:21:25

@channel A good opportunity to expose the product. Traini is looking for a number of professional service partners, including pet medical, trainers, grooming, pet supplies and more, to recommend to the users in our community by AI. We’re focusing on dogs right now. If you are interested, DM me or email trainipet@gmail.com

:spam: Declan
Carly Batist (cbatist@gradcenter.cuny.edu)
2023-07-19 14:32:28

Anyone else going to be at ICCB next week in Kigali? 🙋‍♀️:skintone2: Let me and/or @Stephanie O'Donnell know, we’ve got a conservation tech crew started for it!

🙋 Jes Lefcourt
🎉 Jon Van Oast, Stephanie O'Donnell
Timothy Mayer (tjm0042@uah.edu)
2023-07-20 11:11:33

Hello everyone, I wanted to introduce myself to the channel. I am Tim Mayer, a Research Scientist at the University of Alabama in Huntsville and the Regional Science Coordination Lead for the SERVIR Hindu Kush Himalaya region, as part of SERVIR global's NASA Science Coordination Office (SCO). SERVIR is a joint initiative of NASA and USAID, and we are working around the globe with local partners to develop geospatial solutions and applications to address environmental challenges. I want to thank @Ben Weinstein for connecting me to this group!

Within NASA SCO our team has a dedicated TensorFlow Working Group (TFWG) which focuses on capacity building, application development, and knowledge sharing around Deep Learning approaches. I will be sure to share future talks and updates coming from our group here.

Again really excited to join this slack channel! Tim

👋 Sara Beery, Elie Alhajjar, Avi Sundaresan, Alan Papalia, Carly Batist, Omiros Pantazis, Ben Weinstein, Jon Van Oast, Anastasios Angelopoulos, Dan Morris, Jason Holmberg (Wild Me), Andrew Schulz, Timm Haucke, Maia Adar, Jonathan Roberts, Declan, Lindsey Dukles, aruna, Nicolas Arrieta Larraza, Carl Boettiger
Patrick Beukema (patrickb@allenai.org)
2023-07-21 14:44:26

What do folks use for annotating geospatial data for computer vision (object detection)? Commercial, bespoke, in-house, etc.? Our needs are to label objects in Sentinel-1 or Sentinel-2 imagery. Scenes are large, and we often need to look through many channels.

👀 Mikey Tabak, Carl Boettiger
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-07-21 16:20:27

*Thread Reply:* Perhaps Smarter-labelme https://github.com/robot-perception-group/smarter-labelme can help? It is very useful for tracking objects over time for quick annotation, but you can also use it for an arbitrary set of images. Let me know if you find it useful and we (@Eric Price or I) can help you with any questions about that. The only issue could be multiple channels (other than RGB).

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-07-21 16:21:36

*Thread Reply:* I recently posted about it on Linkedin -- https://www.linkedin.com/posts/ahmadaamir_robotics-labeling-annotate-activity-7086729440344322048-qAud?utm_source=share&utm_medium=member_desktop

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-07-21 16:28:19

*Thread Reply:* Here's a comprehensive list of annotation tools that you can use with their github links: https://www.thinkevolveconsulting.com/list-of-open-source-annotation-tools-for-machine-learning-research/

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-07-21 16:29:55

*Thread Reply:* Have you tried using the segment-geospatial or segment-anything-eo packages? I have tried it with drone imagery and it seems to give a good output. Maybe it could be trained for Sentinel images.

Patrick Beukema (patrickb@allenai.org)
2023-07-21 17:02:51

*Thread Reply:* Thanks so much! I don’t mind if it’s a commercial, paid-for product. But these are a great reference, thanks!

Roni Choudhury (roni.choudhury@kitware.com)
2023-07-21 18:15:45

*Thread Reply:* I work for Kitware Inc. and am a technical lead for DIVE, an open-source video/image annotation platform: https://kitware.github.io/dive/

Roni Choudhury (roni.choudhury@kitware.com)
2023-07-21 18:15:59

*Thread Reply:* others in my team have expertise in geospatial data as well

Roni Choudhury (roni.choudhury@kitware.com)
2023-07-21 18:16:48

*Thread Reply:* our software is generally open source, and we partner with people to develop customizations needed for your specific workflow

Roni Choudhury (roni.choudhury@kitware.com)
2023-07-21 18:17:08

*Thread Reply:* if you're interested in exploring whether we can help you, @Patrick Beukema, please do reach out via DM

Dan Morris (agentmorris@gmail.com)
2023-07-23 13:19:58

*Thread Reply:* @Aakash Gupta Great list! One to add: https://github.com/mfl28/BoundingBoxEditor. It's fast and simple, and the developer has been very responsive. I'm trying to work out a workflow for doing a small number of offline annotations, and FWIW my "top two" right now are LabelMe (the offline one) and BoundingBoxEditor. But I wasn't aware of DIVE, definitely going to take a look at that too!

❤️ Roni Choudhury
👍 Aakash Gupta
Mikey Tabak (tabakma@gmail.com)
2023-07-27 07:52:17

*Thread Reply:* I use CVAT, which has a free online interface. I have not checked out these other tools, but I will be looking at them next time I need to annotate.

Matthias Zuerl (matthias.zuerl@fau.de)
2023-07-24 09:30:39

Hey! I am currently conducting literature research on video-based re-ID methods, and I am specifically interested in video-based re-ID datasets for any non-human species. My colleagues and I recently published such a dataset for polar bears ( https://doi.org/10.3390/ani13050801 ). Tbh, I am not able to find anything similar. Did I miss something? Do you know of any video-based re-ID dataset for animals?

(I know that e.g. in the Amur Tiger dataset ( https://doi.org/10.1145/3394171.3413569 ) one can find video clips of the animals, but the way it is annotated it's not usable "out-of-the-box" for video-based re-ID. This statement is true for some other datasets which contain videos. But I am looking for something which is designed in the same way as the human benchmark datasets, e.g. Mars, iLIDS-VID ... --> sequences as samples, including movement, sorted by ID)

Any help is greatly appreciated! 🙂

🐻‍❄️ Dan Morris
👍 Mitchell Rogers
Sara Beery (sbeery@caltech.edu)
2023-07-24 09:31:25

*Thread Reply:* @Peter Kulits @Chuck Stewart @Jason Holmberg (Wild Me)

🙏 Matthias Zuerl
🎉 Jon Van Oast
Peter Kulits (peterkulits@gmail.com)
2023-07-24 15:32:59

*Thread Reply:* Sure, people have been using video for animal re-id for a while, though unfortunately most of it isn't public. Here's a handful of papers that release video re-id datasets that I think have the information you're looking for:
• Towards Self-Supervision for Video Identification of Individual Holstein-Friesian Cattle: The Cows2021 Dataset
• Visual Localisation and Individual Identification of Holstein Friesian Cattle via Deep Learning
• Classification and Re-Identification of Fruit Fly Individuals Across Days With Convolutional Neural Networks
• Re-Identification of Zebrafish using Metric Learning
• One public, one by request: Deep-learning based identification, tracking, pose estimation, and behavior classification of interacting primates and mice in complex environments

👍 Mitchell Rogers, Malte Pedersen, Than Hitt
Malte Pedersen (mape@create.aau.dk)
2023-07-25 03:22:38

*Thread Reply:* Tracking is basically frame-by-frame re-identification, and you have the IDs in the ground truth files. So if you find a lack of variation in re-id video datasets of animals for benchmarking purposes, maybe an option could be to use tracking sequences as a supplement? Just a quick idea.
• 3D-ZeF - stereo tracking dataset with zebrafish
• BrackishMOT - tracking dataset from brackish water
• Tracking of ants
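To make the idea concrete, a small sketch (plain Python, with hypothetical MOT-style rows of frame, track_id, and box coordinates) of regrouping tracking ground truth into per-identity sequences for re-id:

```python
from collections import defaultdict

def tracks_to_reid_sequences(annotations, min_len=2):
    """Group MOT-style rows (frame, track_id, x, y, w, h) into
    per-identity, frame-ordered sequences usable as re-id samples."""
    seqs = defaultdict(list)
    for frame, track_id, *bbox in sorted(annotations):
        seqs[track_id].append((frame, tuple(bbox)))
    # drop identities too short to carry any movement information
    return {tid: s for tid, s in seqs.items() if len(s) >= min_len}

ann = [(1, 7, 10, 10, 5, 5), (2, 7, 12, 11, 5, 5), (1, 9, 40, 40, 6, 6)]
tracks_to_reid_sequences(ann)  # only ID 7 survives min_len=2
```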

Neha Hulkund (nhulkund@mit.edu)
2023-07-24 09:40:45

Has anyone worked with or know of tabular data for conservation? I'm hoping to look at different applications of this, but wasn't sure if it's widely used or not.

👀 Sara Beery
Carl Boettiger (cboettig@berkeley.edu)
2023-08-02 11:51:32

*Thread Reply:* Yes, lots of conservation-related data is still distributed in tabular formats! e.g. the largest occurrence-record collection, GBIF, is distributed as Parquet, and eBird is distributed as compressed CSV. What kind of conservation data did you have in mind?
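For instance, eBird-style gzipped tab-separated records can be streamed with nothing but the standard library (the column names below are just an illustration, not the real schema):

```python
import csv
import gzip
import io

# A miniature gzipped TSV standing in for an eBird-style export.
raw = ("SPECIES\tCOUNT\tLATITUDE\tLONGITUDE\n"
       "Turdus migratorius\t3\t42.44\t-76.50\n")
blob = gzip.compress(raw.encode())

# Stream-decompress and parse row by row; no full decompression on disk.
with gzip.open(io.BytesIO(blob), mode="rt") as fh:
    rows = list(csv.DictReader(fh, delimiter="\t"))

rows[0]["SPECIES"]  # 'Turdus migratorius'
```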

Pen-Yuan Hsing (penyuanhsing@posteo.is)
2023-07-25 12:12:59

Hello everyone,

With my original background in ecology/conservation, I'm working on an open science project to give motion-sensing wildlife camera traps binocular vision. The goal is for camera traps to capture spatial information along with wildlife photos. This data has the potential to greatly ease estimating wildlife populations.

I've been working with some engineers from the Gathering for Open Science Hardware, and one of them has devised a 3D-printed stereo lens attachment for camera traps, while not adding to the power envelope. We hope this will be easier than modifying camera traps or building them from scratch to provide 3D vision. I know some of those here are working on stereovision algorithms for wildlife observations, and being able to directly and efficiently obtain stereo images could be helpful.

To that end, we've started a crowdfunding campaign on Experiment.com, and thanks to generous contributors we're already at 92% of our funding goal but with only 9 days left!

I'm posting this here to request your support and spread the word. Any contribution, even just $1 will help! This project may also be of interest as demonstrating a different way of funding open science. If successful, our design of the stereo lens for camera traps and any data obtained will be published as open data and open source hardware.

Let me know if you have any questions/comments, and thank you in advance for your help!

👍 Peter van Lunteren, Timm Haucke, Aida Mashkouri
😎 Timm Haucke, Dan Morris, Andrew Schulz, Mikey Tabak
Dan Stowell (dan.stowell@naturalis.nl)
2023-07-28 11:12:58

Hi folks. Method request! Do you know any work that shows how to apply machine learning to GPS tracks data, e.g. collar tags? (Any species. I'd like to find some examples to use in a course. It has to involve machine learning...)

Sara Beery (sbeery@caltech.edu)
2023-07-28 11:14:26

*Thread Reply:* @Catherine Villeneuve

🙏 Dan Stowell
Oisin Mac Aodha (macaodha@caltech.edu)
2023-07-28 11:18:27
Dan Stowell (dan.stowell@naturalis.nl)
2023-07-28 11:35:28

*Thread Reply:* Oh I know that first-author! Nice example

Toryn Schafer (tschafer@tamu.edu)
2023-08-01 17:53:03

*Thread Reply:* I'd be curious to see if other examples pop up. I would say machine learning is not popular yet with movement ecologists. Here is another example though: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0235750

Also, if you consider MAXENT to be ML, it is frequently used with telemetry data to estimate resource selection.

Timothy Mayer (tjm0042@uah.edu)
2023-07-28 13:10:08

For all those planning to attend AGU 2023 and looking for an exciting session, please check out and submit an abstract to GC018 - Applications of Machine Learning and Deep Learning to address environmental challenges and Sustainable Development Goals. Abstracts are due on 8/2 :)

Michael Bunsen (notbot@gmail.com)
2023-07-28 14:39:13

For anyone who will be in Portland for the ESA 2023 conference, I invite you to our early-morning workshop on automated insect monitoring! https://esa2023.eventscribe.net/fsPopup.asp?PresentationID=1231858&Mode=presInfo

Also join the #esa2023happy_hour channel if you are interested in meeting up during the week.

😎 Jason Holmberg (Wild Me), Shir Bar, Carly Batist, Sara Beery
👍 Jon Van Oast, Timothy Mayer, Dan Morris, Maddie Cusimano
🤩 Carly Batist, Thomas Radinger
Dan Morris (agentmorris@gmail.com)
2023-07-31 10:34:55

New dataset on LILA, thanks to the amazing work being done at The Cacophony Project to manage invasive predators in New Zealand:

https://lila.science/datasets/new-zealand-wildlife-thermal-imaging/

This is LILA's first thermal camera trap dataset... I feel like even just a couple years ago, thermal cameras were a concept one might dream of deploying if you had $20,000 and a best friend in the CIA. Now they're becoming quite practical (in part thanks to The Cacophony Project's sibling organization, 2040), and thermal images open up a whole new interesting world of AI. As much as I like to throw MegaDetector at everything, detection is basically a non-issue with thermal sensors. Species classification is... different than it is with optical sensors. You have some information in the pixels, but at least as much information in the movement trajectory. So, for AI folks who want to try something that there aren't already 1000000 papers about, take a look at this dataset!

Example video:

https://storage.googleapis.com/public-datasets-lila/nz-thermal/videos/1486055.mp4

@Giampaolo Ferraro @matthew hellicar
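To make the "information in the movement trajectory" point concrete, here's a toy sketch (my own illustration, not part of the dataset's tooling) of trajectory features one might feed a species classifier, computed from per-frame thermal detection centroids:

```python
import numpy as np

def trajectory_features(centroids):
    """Toy trajectory features from the per-frame detection centroids
    of one thermal track: an (N, 2) sequence of (x, y) positions."""
    c = np.asarray(centroids, dtype=float)
    steps = np.diff(c, axis=0)                    # frame-to-frame motion
    speeds = np.linalg.norm(steps, axis=1)        # pixels per frame
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turns = np.abs(np.diff(headings))             # heading change per frame
    return {
        "mean_speed": speeds.mean(),
        "speed_var": speeds.var(),
        "mean_turn": turns.mean() if len(turns) else 0.0,
    }

trajectory_features([(0, 0), (3, 4), (6, 8)])  # straight line: mean_turn == 0.0
```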

👏 Malte Pedersen, Fagner Cunha, Dante Wasmuht, Cameron Trotter, Elie Alhajjar, Timm Haucke, Jason Holmberg (Wild Me), Alan Papalia, Juliana Gomez Consuegra, Aakash Gupta, Felipe Montealegre-Mora, Sara Beery, David Will
👍 Justin Kay, Valentin Gabeff, Benjamin Kellenberger, Elie Alhajjar, Jonathan Roberts, Timm Haucke, Peter van Lunteren, Jason Holmberg (Wild Me), Matthias Zuerl, Vincent Christlein, Thor Veen, Paul Melki
🙌 Carly Batist, Elie Alhajjar, Timm Haucke, Jason Holmberg (Wild Me)
🔥 Oisin Mac Aodha, Elie Alhajjar, Elizabeth Campolongo, Shir Bar, Timm Haucke, Rowan Converse, Peter van Lunteren, Jason Holmberg (Wild Me), Taiki Sakai - NOAA Affiliate, Ghazi Randhawa, Maddie Cusimano, Carl Boettiger, Josh Veitch-Michaelis
😎 Jon Van Oast, Jason Holmberg (Wild Me), Michael Bunsen
:flag_nz: Mitchell Rogers
👏:skin_tone_3: Pen-Yuan Hsing
Timothy Keitt (tkeitt@utexas.edu)
2023-07-31 18:11:26

Hi Folks, I am Timothy Keitt from the University of Texas at Austin. I just joined the slack. We're interested in automated monitoring of biodiversity. I'll be at ESA (Mon-Wed) if anyone wants to chat in person. All the best.

👋 Sara Beery
Jon Van Oast (jon@wildme.org)
2023-07-31 18:33:21

*Thread Reply:* see also #esa2023happy_hour which is on tuesday, so hope you can make it.

👍 Timothy Keitt
Jacob Marks (jamarks13@gmail.com)
2023-08-02 14:29:48

Hey everyone!

My name is Jacob and I'm an ML Engineer and Developer Evangelist at Voxel51.

I run an industry spotlight blog series where I focus on how computer vision is being utilized in different industries. So far I have done Agriculture and Manufacturing, and Healthcare is coming out in 2 weeks.

Next up on my list is Climate and Conservation!!

Would love to chat with anyone working in the space, both in academia and industry. If you are open to speaking about your work, feel free to reach out to me either here on Slack, via email at jacob@voxel51.com, or on LinkedIn — would love to learn about what you're working on!!

Best, Jacob

😎 Jon Van Oast, Jason Holmberg (Wild Me), Sara Beery, Aakash Gupta, Nora Gourmelon, Amara McCune
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-08-03 07:12:58

Very happy to announce our latest work in the direction of inferring animal behavior automatically from drone videos. Not only this, we also release a never-seen-before kind of dataset of zebras (both plains and Grévy's) recorded in their natural environment in Kenya using two synchronized drones.

Code: https://github.com/robot-perception-group/animal-behaviour-inference Paper: https://www.biorxiv.org/content/10.1101/2023.07.31.551177v1 Dataset: https://keeper.mpdl.mpg.de/d/a9822e000aff4b5391e1/ Video: https://youtu.be/Zu-t0JJsz5o

Feel free to reach out to me about it.

🙌 Jonathan Roberts, Ștefan Istrate, Matthias Zuerl, Sara Beery, Shir Bar, Enis Berk Çoban, Felipe Parodi, Yseult Hb, Timm Haucke, Toryn Schafer, Dan Morris, Aakash Gupta, Juliana Gomez Consuegra, Anastasios Angelopoulos, mimi, Felipe Montealegre-Mora, Jacob Marks, Lingchao Mao, Aleksis Pirinen
:zebra_face: Blair Costelloe, Cameron Trotter, Taiki Sakai - NOAA Affiliate, Mitchell Rogers, Andrew Schulz, Jacob Marks
👏 Katie Zacarian, Steve Haddock, Urs
👍 Hsun-Yi Hsieh, Jon Van Oast, Omiros Pantazis, Valentin Gabeff
🙏 Anastasios Angelopoulos
🎉 Pen-Yuan Hsing
Blair Costelloe (blaircostelloe@gmail.com)
2023-08-03 07:29:49

*Thread Reply:* Awesome! Dan Rubenstein was telling me about this in Anchorage a couple of weeks ago. Do you think it can work in nadir footage of zebras, or is the oblique viewpoint necessary?

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-08-03 07:47:03

*Thread Reply:* Thanks. The network as is will most likely need a small bit of fine-tuning, but that is what this paper is all about: with a small number of manual annotations you can kick-start the whole process and have the network tuned for your dataset. If you want, you could send us a few pictures to try out quickly.

Blair Costelloe (blaircostelloe@gmail.com)
2023-08-03 07:54:17

*Thread Reply:* If it's not too much trouble it would be really interesting to know how it works OOTB with my data. I'll send a few example frames to your email. Thanks!

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-08-03 07:54:46

*Thread Reply:* No prob! Looking forward to it.

Dan Morris (agentmorris@gmail.com)
2023-08-03 12:55:09

*Thread Reply:* Really neat dataset! I'm able to click through to some of the individual images, but if I try to download the dataset (or one of the top-level folders), I get "size too large". I don't know how Keeper works... do you have a recommendation for downloading the dataset?

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-08-03 14:12:02

*Thread Reply:* Thanks! There is a server configuration limitation apparently (as the folders are about ~100GB). I am trying to get it resolved through the MPI admins. In the meantime, I will figure out another way to share it with you directly.

Dan Morris (agentmorris@gmail.com)
2023-08-03 14:50:22

*Thread Reply:* Thanks! No rush to share with me directly... I'm just wearing my dataset-list-making hat, and part of that is making sure datasets I add to various lists are downloadable. I'll be your first download beta-tester when you get it all worked out.

👍 Aamir Ahmad, Jason Holmberg (Wild Me)
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2023-08-03 15:24:39

*Thread Reply:* I would love to get a copy too. Thank you!

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-08-30 10:40:05

*Thread Reply:* Hi @Dan Morris and @Jason Holmberg (Wild Me).. we solved this download issue with a workaround. There is now a zipped version of each folder in the root folder, and you can download the zips individually. Downloading the folder was not working apparently because of some buffer size limit issue, but now you do not need to download the folder. However, I've also left the folders there so people can browse through individual files if they want. Let me know if there is still some issue.

Dan Morris (agentmorris@gmail.com)
2023-09-01 21:21:32

*Thread Reply:* Thanks! This is straightforward to work with; I confirmed I could read the annotations and render boxes on a sample image, then added to the running list of aerial/drone wildlife datasets:

https://github.com/agentmorris/agentmorrispublic/blob/main/drone-datasets.md#large-scale-semi-automatic-inferen[…]mal-behavior-from-monocular-videos

While I was there, I also added another one from the backlog, from Koger et al, ~40k boxes on ungulates and geladas in drone images:

https://github.com/agentmorris/agentmorrispublic/blob/main/drone-datasets.md#quantifying-the-movement-behaviour[…]s-using-drones-and-computer-vision

😎 Jason Holmberg (Wild Me)
:zebra_face: Blair Costelloe, Aamir Ahmad
👍 Aamir Ahmad
Carl Boettiger (cboettig@berkeley.edu)
2023-08-09 13:51:05

Hi friends -- what's the state of the art these days for camera trap metadata? Do people use schema.org or EML, or other custom formats?

Jinsu Elhance (jelhance@gmail.com)
2023-08-09 16:49:37

*Thread Reply:* Hi Carl! Just pinging @Nathaniel Rindlaub for any insights?

👋 Carl Boettiger
Dan Morris (agentmorris@gmail.com)
2023-08-09 18:20:38

*Thread Reply:* I'm aware of two standards... the Camera Trap Metadata Standard (CTMS):

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5267527/

...was used by eMammal; I'm not aware of current systems that use this as a native format.

If I were putting my eggs in one basket, it would be the newer Camtrap DP standard:

https://tdwg.github.io/camtrap-dp/

...which (according to the page, I've not personally verified this) is exported by Agouti and Trapper, and supported as an input format for GBIF.

Approximately 7000 people, approximately 3000 of whom are on this Slack, wrote a paper about it:

https://ecoevorxiv.org/repository/view/5593/

Given that many of those authors were also authors on the original CTMS standard, I think it's safe to declare the CTMS deprecated.
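As a sketch of what consuming a Camtrap DP package can look like - assuming the Frictionless Data layout the spec describes (a datapackage.json listing CSV resources such as deployments, media, and observations); treat the field names below as illustrative:

```python
import csv
import json
import tempfile
from pathlib import Path

def load_camtrap_dp(package_dir):
    """Read a Camtrap DP package: datapackage.json enumerates the CSV
    resources; return them as {resource_name: list-of-row-dicts}."""
    pkg = json.loads((Path(package_dir) / "datapackage.json").read_text())
    data = {}
    for res in pkg["resources"]:
        with open(Path(package_dir) / res["path"], newline="") as fh:
            data[res["name"]] = list(csv.DictReader(fh))
    return data

# Build a synthetic one-row package just to exercise the loader.
d = Path(tempfile.mkdtemp())
(d / "observations.csv").write_text(
    "observationID,scientificName\nobs1,Panthera tigris\n")
(d / "datapackage.json").write_text(json.dumps(
    {"resources": [{"name": "observations", "path": "observations.csv"}]}))

load_camtrap_dp(d)["observations"][0]["scientificName"]  # 'Panthera tigris'
```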

🙌 Carl Boettiger
💚 Carl Boettiger
Riley Knoedler (mknoedler@west-inc.com)
2023-08-09 15:57:29

Hi folks, I'm Riley Knoedler, I heard about the slack at CVPR and I'm very excited to join the community! I'm a Data Scientist at Western EcoSystems Technology, and most of my team's work focuses on developing machine learning solutions for monitoring wildlife interactions with renewable energy facilities.

👏 Jon Van Oast, Jacob Marks
Ben Weinstein (benweinstein2010@gmail.com)
2023-08-09 16:38:11

*Thread Reply:* Welcome to the community! What would you say is the largest challenge your team faces? Model development? Annotations? We’ve done some wind farm bird monitoring work, just curious.

Riley Knoedler (mknoedler@west-inc.com)
2023-08-10 10:51:54

*Thread Reply:* Probably sufficient annotations and distinguishing near misses from collisions. What kind of challenges has your team faced?

Ben Weinstein (benweinstein2010@gmail.com)
2023-08-10 14:02:15

*Thread Reply:* We are on surveys before active energy development, so we have to cover large areas to look for birds in proposed wind areas. Large data sizes and a very long species list.

Riley Knoedler (mknoedler@west-inc.com)
2023-08-10 14:18:24

*Thread Reply:* That's really interesting, what are your survey methods? Are you using camera traps looking up at the sky, or drones, or something else?

Jinsu Elhance (jelhance@gmail.com)
2023-08-09 16:51:07

Hey all, curious if anyone has come across a tool that allows you to view very high resolution true color imagery and visually estimate fractional cover as an input to a remote sensing model that maybe uses coarser imagery?

Sara Beery (sbeery@caltech.edu)
2023-08-09 16:58:53

*Thread Reply:* @Dan Morris

Dan Morris (agentmorris@gmail.com)
2023-08-09 18:15:18

*Thread Reply:* I don't know of any tools that do this, though you could certainly use existing high-resolution (let's say 1m) tree cover maps as training data for a system you'll eventually run with coarser (10m?) imagery. Consider the Chesapeake Land Cover dataset:

https://lila.science/datasets/chesapeakelandcover

...which is really an older version of:

https://www.chesapeakeconservancy.org/conservation-innovation-center/high-resolution-data/lulc-data-project-2022/

...or the NOAA Coastal Change Analysis Program's high-resolution map:

https://catalog.data.gov/dataset/coastal-change-analysis-program-c-cap-high-resolution-land-cover-and-change-data

That's a lot of geographic bias, but it's a place to start. I'm not aware of other large (i.e., larger than one city) high-resolution land cover or tree cover datasets, though I don't claim that I would know of others.
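For the aggregation step - turning a high-resolution (say 1m) label map into fractional cover on a coarser (say 10m) grid - a minimal numpy sketch:

```python
import numpy as np

def fractional_cover(labels, factor, cls=1):
    """Aggregate a high-res label raster (e.g. 1m tree/no-tree) to a
    coarser grid as the per-pixel fraction covered by class `cls`."""
    h, w = labels.shape
    assert h % factor == 0 and w % factor == 0, "pad raster to a multiple"
    # Split into (factor x factor) blocks and average the class mask.
    blocks = (labels == cls).reshape(h // factor, factor,
                                     w // factor, factor)
    return blocks.mean(axis=(1, 3))

hi = np.zeros((4, 4), dtype=int)
hi[:2, :2] = 1           # one quadrant fully treed
fractional_cover(hi, 2)  # [[1., 0.], [0., 0.]]
```

Those fractions then serve as regression targets aligned with the coarser imagery.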

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-08-10 10:14:40

*Thread Reply:* Check out the Segment Anything Model. You can use it to annotate the tree area or building footprint and then pipe that to your model with lower-resolution imagery.

Ruben Remelgado (ruben.remelgado@gmail.com)
2023-08-11 05:40:05

*Thread Reply:* https://www.nature.com/articles/s41586-018-0411-9

A worthwhile read on this topic. You can (and should) replace modelling components, but the workflow is still valid.

Riley Knoedler (mknoedler@west-inc.com)
2023-08-10 14:33:15

Has anyone ever leveraged the computer vision models used by iNaturalist or eBird to process large amounts of data locally? I'm trying to figure out if that is a possible / permissible use case.

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-08-10 14:53:19

*Thread Reply:* @gvanhorn ?

Oisin Mac Aodha (macaodha@caltech.edu)
2023-08-10 16:19:01

*Thread Reply:* There is a good-performing model, trained on the iNat2021 competition dataset, now available on HuggingFace. Worth checking out; there is also an online demo on the right of this page: https://huggingface.co/timm/eva02_large_patch14_clip_336.merged2b_ft_inat21

👍 Riley Knoedler, Dan Morris, Sara Beery, Emilio Luz-Ricca
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-08-11 16:30:07

Hi all, we are now back in the Hungarian Steppe to deploy and test our autonomous blimps for biodiversity monitoring (here Przewalski's horses in particular). Feel free to follow updates regarding this on my LinkedIn page (https://www.linkedin.com/posts/ahmadaamir_autonoumous-blimp-day1-activity-7095860403892559872-AZJj?utm_source=share&utm_medium=member_desktop)

linkedin.com
👀 Pen-Yuan Hsing, Yseult Hb, Alasdair Davies
Carly Batist (cbatist@gradcenter.cuny.edu)
2023-08-13 12:04:38

https://arxiv.org/abs/2201.11192

arXiv.org
👍 Aamir Ahmad, Sara Beery, Timm Haucke, Björn Lütjens
Caleb Robinson (calebrob6@gmail.com)
2023-08-13 14:28:54

*Thread Reply:* PyTorch Dataset available in TorchGeo -- https://torchgeo.readthedocs.io/en/latest/api/datasets.html#reforestree

🙌 Carly Batist
Emily Lines (erl27@cam.ac.uk)
2023-08-14 06:09:08

*Thread Reply:* Hmmm... this is a nice idea but given the known high levels of uncertainty of applying general allometric equations (particularly in the tropics, and particularly for equations without height as an input), I hope others would also search out e.g. TLS or destructively sampled ground truth datasets, which are often available. It would also be great to see these datasets attempt to compute the uncertainties in their biomass measurements.

I do find it disappointing that the title discusses 'tropical forest carbon stock' when this is very specifically a small scale dataset of regional tropical agro-forestry. Monitoring agro-forestry for NBS payments is a very worthy aim in itself, and I think it would help the community progress if manuscripts/datasets were more specific about their scope...
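[Editor's note] To make the allometry point above concrete, here is a sketch of one widely used pantropical model (Chave et al. 2014, Eq. 4), which does take height as an input; the coefficients are quoted from memory and should be checked against the paper before any real use:

```python
def agb_chave2014(wood_density, dbh_cm, height_m):
    """Estimated above-ground biomass (kg) for a tropical tree.

    wood_density: g/cm^3, dbh_cm: diameter at breast height (cm),
    height_m: total tree height (m). Chave et al. 2014, Eq. 4:
    AGB = 0.0673 * (rho * D^2 * H) ** 0.976
    """
    return 0.0673 * (wood_density * dbh_cm**2 * height_m) ** 0.976

# A 30 cm DBH, 25 m tall tree with rho = 0.6 g/cm^3 comes out around
# 700 kg; per-tree errors of tens of percent are typical, which is
# exactly the uncertainty-propagation point made above.
tree_agb = agb_chave2014(0.6, 30, 25)
```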

👍 Björn Lütjens, stefano puliti
Björn Lütjens (bjoern.luetjens@gmail.com)
2023-08-26 12:54:56

*Thread Reply:* Hi Emily, I was part of the paper, and unfortunately it's too late to rename, but I agree that agroforestry would have been a more appropriate title! Good point, thank you

Ben Weinstein (benweinstein2010@gmail.com)
2023-11-06 16:36:40

*Thread Reply:* Following up here, were all trees intended to be annotated? Or just ground truth trees with biomass? https://github.com/gyrrei/ReforesTree/issues/5

Patrick Beukema (patrickb@allenai.org)
2023-08-14 11:05:32

Hi all, we hosted David Rolnick a few weeks ago in our Environmental AI series. He delivered an inspired talk on applying ML for climate action. Among his many incisive remarks: “AI for Good” doesn’t mean just adding new “good” applications of AI. It means shaping all applications of AI to be better for society. Check it out! https://www.youtube.com/watch?v=rs4MpjNxOLQ

YouTube
Allen Institute for AI (https://www.youtube.com/@allenai)
❤️ Caleb Robinson, Katelyn Morrison, Ditiro Rampate, Juliana Gomez Consuegra, Aakash Gupta, Jason Holmberg (Wild Me), Yseult Hb, Emilio Luz-Ricca, Michael Bunsen
🎉 Michael Bunsen
Andrew Schulz (akschulz@gatech.edu)
2023-08-16 10:05:21

Hi All! After a long year of writing, editing, and proofing with the amazing @Suzanne Stathatos and more, we have our perspectives piece out on conservation tools (this is not particularly for people in this group, but I think everyone can learn something from the paper). The paper serves more as a starter guide to some of the new techniques, tools, and technologies used for conservation. We include a vocab list of some of the common words thrown around, as well as a case study highlighting some of the excellent work shared throughout this group. Hope you enjoy it! Check out the paper here: https://doi.org/10.1098/rsif.2023.0232

🎉 Avi Sundaresan, Elie Alhajjar, Elizabeth Bondi-Kelly, Sara Beery, Devis Tuia, Rowan Converse, Yuanqi Du, Timm Haucke, Omiros Pantazis, Taiki Sakai - NOAA Affiliate, Nico Lang, Michael Procko, Jon Van Oast, Shir Bar, Anastasios Angelopoulos, Sam Lapp, Yseult Hb, Dan Morris, Jason Holmberg (Wild Me), Emilio Luz-Ricca
❤️ Suzanne Stathatos, Maddie Cusimano, Taiki Sakai - NOAA Affiliate, Eric Greenlee, Anastasios Angelopoulos, Justine Boulent, Pranav Khandelwal, Elie Alhajjar, Alison Ketz, Pen-Yuan Hsing, Jason Holmberg (Wild Me), Talia Speaker
👍 Emily Lines, Valentin Gabeff, Anastasios Angelopoulos, Casey Youngflesh, Elie Alhajjar, Aamir Ahmad
charlotte (deshchang@gmail.com)
2023-08-16 14:58:02

Hi everyone! I’m Charlotte Chang, and it’s great to be here! I’m an Assistant Professor at Pomona College, and my work has used natural language processing to look at conservation communications on social media. I’d love to chat with anyone who is interested in NLP and conservation (communications). I’ve also used bioacoustics and CV models in my (undergraduate) teaching.

👋 Devis Tuia, Toryn Schafer, Ankita Shukla, Andrew Schulz, Sara Beery, Omiros Pantazis, Abhay, Dan Morris, Emerson de Lemmus, Valentin Gabeff, Emilio Luz-Ricca, Talia Speaker, Lauren Harrell
👍 Bistra Dilkina, Aakash Gupta, Emerson de Lemmus, Jason Holmberg (Wild Me)
👏 Katie Zacarian, Emerson de Lemmus, Tom Wye (Fishial.ai), Jason Holmberg (Wild Me)
👍:skin_tone_3: Pen-Yuan Hsing
🙌 Maia Adar, Jason Holmberg (Wild Me)
Taiki Sakai - NOAA Affiliate (taiki.sakai@noaa.gov)
2023-08-16 15:12:14

*Thread Reply:* Hi Charlotte! I don't do any NLP work, but I went to Harvey Mudd and work in bioacoustics 🐋

💯 charlotte
❤️ charlotte
Abhay (abhaykash12@gmail.com)
2023-08-16 17:40:09

*Thread Reply:* Hi Charlotte! My background is primarily in NLP and I've done some work in the AI x Conservation space (CV). Happy to chat more if you think I'd be a useful resource! :)

💯 charlotte
👋 charlotte
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-08-21 04:03:34

Hi Guys, just wanted to update on our fantastic field trip!

Our airship was able to visually track and autonomously follow Przewalski's horses roaming free in the Steppe. In the video below, we show how the blimp 'fights' the wind to keep the horses centered in its field of view. We are a step closer to long-term monitoring with autonomous robots. Some more info in the post here: https://www.linkedin.com/posts/ahmadaamir_hungarian-steppe-airship-activity-7099298534696312832-EC-k?utm_source=share&utm_medium=member_desktop . Congrats @Eric Price and @Pranav Khandelwal for awesome teamwork this summer!

linkedin.com
👍 Eric Price, Jason Holmberg (Wild Me), Urs, Aakash Gupta, Sara Beery, Timm Haucke, Emilio Luz-Ricca, Yuru Jia, Rebecca Wilks
🙌 Blair Costelloe, Jason Holmberg (Wild Me), Timm Haucke, Yseult Hb
Andrew Schulz (akschulz@gatech.edu)
2023-08-22 03:41:26

Hi All - I did not know where to put this, but I wanted to highlight an incredible ecology researcher Dr. Gabriela Palomo. Specifically wanted to link everyone to her website (https://gabspalomo.github.io/). Why am I putting this in the AI for Conservation slack, well I know many of us use drones, camera traps, GIS, etc. and she has a bunch of incredible silhouettes (https://gabspalomo.github.io/silhouettes.html) that are under a Creative Commons Attribution-NonCommercial 3.0 Unported license (see example)! Happy science-ing!

😍 Leonardo Viotti, Yseult Hb, Jason Holmberg (Wild Me), Timm Haucke, Hayley Rechter, Björn Lütjens, Talia Speaker
👍 mimi, Jason Holmberg (Wild Me), Rowan Converse, Timm Haucke, Hayley Rechter
Pietro Perona (perona@caltech.edu)
2023-08-22 14:31:16

A take-home message I get from the Kerner-Nakelembe presentation is that (a) crop yield is a crucial issue, and (b) people are unclear on how to impact yields (the factors are clear: land fractioning, use of fertilizers, use of well-selected seeds, etc., but affecting those factors requires changes in society, in the economy, and so on). I could not tell whether it is possible to estimate crop yields (and thus track progress) from satellite images.

Eric Price (eric.price@ifr.uni-stuttgart.de)
2023-08-23 02:06:24

*Thread Reply:* From what I understand, the relation between sat image and yield is highly crop specific, so some reference ground data is necessary to calibrate the estimation for a specific type of crop, but there's quite a lot of work on that topic: https://scholar.google.com/scholar?q=related:2qpIflWO5X8J:scholar.google.com/&scioq=estimating+crop+yields+from+satellite+images&hl=en&as_sdt=0,5

Maddie Cusimano (maddie@earthspecies.org)
2023-08-23 11:39:50

Hey everyone, Earth Species Project is partnering with the Footprint Coalition to provide grants for research at the intersection of ML and interspecies communication, including applications to conservation. This program will issue fast-grants of up to $10,000 to researchers to do work in this domain, covering anything from dataset design to processing for machine learning, field studies of interspecies communication, or developing new ways to investigate non-human signals.

A detailed description of this funding program and instructions for applying can be found on the Experiment.com website here. Students, postdocs and researchers outside of universities are all encouraged to apply.

The deadline is currently Sept 22, with rolling admissions. Please get in touch if you have questions 😊

👀 Yseult Hb, Ankita Shukla, Talia Speaker
😎 Jason Holmberg (Wild Me)
❤️ Marius Miron
Atriya Sen (atriya@atriyasen.com)
2023-08-24 16:59:20

Hello folks, I have a research assistantship position available for an MS or PhD student (new admission) in computer science at the University of New Orleans.

The position will be initially funded by a recent National Science Foundation award, details of which (including an abstract of the project) may be found here: NSF Award Search: Award # 2246032 - CRII: III: Explainable Artificial Intelligence for Biodiversity Science & Conservation

I'm relatively flexible about the research agenda though.

The position comes with full tuition coverage, benefits, and a monthly stipend of $2000. Please email me at asen@uno.edu with the following documents in a single PDF file:

  1. CV,
  2. Academic transcripts,
  3. Half-page statement of interest.

The position is also advertised here: https://www.uno.edu/academics/cos/computer-science/open-positions-in-computer-science
Katelyn Morrison (kcmorris@andrew.cmu.edu)
2023-08-24 17:11:06

*Thread Reply:* Hi is it okay if I share this in the Explainable AI slack channel that I am a part of??

Katelyn Morrison (kcmorris@andrew.cmu.edu)
2023-08-24 17:11:46

*Thread Reply:* I'm currently a PhD student but super interested in the project topic. Would love to chat sometime about what you are working on 🙂

❤️ Sara Beery
Atriya Sen (atriya@atriyasen.com)
2023-08-24 17:35:10

*Thread Reply:* Please do share; thanks!

Devis Tuia (devis.tuia@epfl.ch)
2023-08-25 02:46:57

*Thread Reply:* Is there an xAI channel? 😮

Katelyn Morrison (kcmorris@andrew.cmu.edu)
2023-08-25 09:56:47
Irina Tolkova (itolkova@g.harvard.edu)
2023-08-28 09:19:33

Hi everyone! Two upcoming postdoctoral fellowship application deadlines:

  1. The first is the Christopher W. Clark Postdoctoral Fellowship in Conservation Bioacoustics at the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab (revised deadline October 15). The selected candidate will be the inaugural recipient of this three-year fellowship. Candidates will propose innovative and independently developed bioacoustics applications in service of conservation, honoring the legacy of the fellowship’s namesake and program’s founding director, Christopher Clark. While Chris Clark primarily focused on baleen whales, scientists at the Yang Center apply bioacoustic methods for studying marine mammals, fishes, birds, primates, frogs, bats, elephants, insects, anthropogenic sounds, and more in ecosystems around the world! Yang Center staff collect and synthesize data, develop hardware and software tools, and work with partners and students around the world to facilitate the use of bioacoustics in conservation. We encourage proposals with bold, potentially transformative research potential, and especially welcome projects that leverage the tools or research developed at the Center to have an impact on biodiversity conservation (e.g., tool development, knowledge generation, implementation etc.).

In addition to salary support and annual raises, applicants can anticipate approximately $10K/year in research support, with the possibility of additional funds through collaboration with ongoing projects at the Center. For the full posting and to apply, please visit: https://academicjobsonline.org/ajo/jobs/24693.

In the cover letter, applicants should suggest one or more members of the Center as a potential postdoctoral advisor. This person is not required to be a direct research collaborator, but will provide administrative support, professional guidance, and a connection into the Yang Center and Lab of Ornithology. Any member of the Center can serve as a secondary or co-mentor. Primary advisors can include: Dena Clink, Ben Gottesman, Daniela Hedwig, Holger Klinck, Aaron Rice, Larissa Sugai, Laurel Symes, and Connor Wood. Please reach out to us with questions!

  2. The second upcoming fellowship is the Edward W. Rose postdoctoral fellowship (deadline September 9). This fellowship supports postdoctoral research at the Cornell Lab of Ornithology. We encourage you to learn more about this fellowship and to share the information with your colleagues and research networks. As part of the Cornell Lab of Ornithology, the K. Lisa Yang Center for Conservation Bioacoustics can host Rose postdoctoral fellows who are conducting research that is synergistic with the research conducted by the Yang Center, even if fellows are working on taxa other than birds.

Candidates are welcome to apply to both positions, while noting that the Yang Center Clark Fellowship is particularly focused on bioacoustics and conservation, and that the Rose Fellowship supports positions across the Lab of Ornithology, is open to topics beyond bioacoustics, and can have a focus on basic or applied research.

If your research advances bioacoustics research and conservation, we encourage you to contact us and apply to either or both programs!

Cornell Chronicle
academicjobsonline.org
Birds, Cornell Lab of Ornithology
👍 Holger Klinck, Subhransu Maji, Ian Ingram
😍 Sara Beery, Andrew Schulz
🐋 Taiki Sakai - NOAA Affiliate
slackbot
2023-08-28 23:55:54

This message was deleted.

:spam: Atul Ingle, Declan
Justin Kay (justinkay92@gmail.com)
2023-08-29 01:17:42

*Thread Reply:* Hey @Arvin Sun this is a bit spammy and not relevant to this community. Can you please take the post down?

Sara Beery (sbeery@caltech.edu)
2023-08-29 01:33:15

*Thread Reply:* I deleted

👍 Justin Kay, Jason Holmberg (Wild Me), Katelyn Morrison, Declan, Shir Bar
❓ Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2023-08-29 01:33:33

*Thread Reply:* Thanks @Justin Kay for the heads up

👍 Justin Kay, Jason Holmberg (Wild Me), Katelyn Morrison, Shir Bar
Casey Colson (caseycolson24@gmail.com)
2023-08-29 23:24:58

Hi all - by your judgment, what proportion of papers/projects cited in this Slack channel refer to terrestrial species vs. ocean species? Just curious. Anecdotal guesses are fine. Tx.

Malte Pedersen (mape@create.aau.dk)
2023-08-30 02:29:47

*Thread Reply:* Hi Casey, generally speaking, there is "a lot" more going on in the terrestrial domain when it comes to machine learning and computer vision. A simple reason for that is the hostile environment; it is just much harder to obtain data under water. However, we have a channel called #marine where we post things related to, you guessed it, marine stuff and ocean species! Please, come join us if you are interested in the wet domain 🌊🐟

👍 Eric Colson
Cameron Trotter (cater@bas.ac.uk)
2023-08-30 04:39:52

*Thread Reply:* Unsure I'd be able to give a ballpark figure, but marine-focused research makes up a relatively small proportion of work in this domain.

Anecdotally, I am in the process of writing up a literature review of CV applications to benthic environments, and when I discuss future directions, a lot of the papers I talk about focus on terrestrial data rather than data from other marine environments. This is mostly due to the higher abundance of terrestrial data allowing for greater research scope.

It's relatively cheap to set up some land-based camera traps and leave them in an area for a few months, but marine data can be expensive to obtain, requires a large human effort, and often needs specialist equipment, especially at depth.

👍 Eric Colson
Levi Cai (lcai@whoi.edu)
2023-08-30 11:02:18

*Thread Reply:* Agreed with the above. In terms of the number of labelled images that can be used to train detectors, for instance, terrestrial has several orders of magnitude more publicly available labelled data.

Levi Cai (lcai@whoi.edu)
2023-08-30 11:03:42

*Thread Reply:* And even then, quite a bit of the available ocean data is surface-based imagery of animals like whales, turtles, and sharks/rays.

👍 Eric Colson
Patrick Beukema (patrickb@allenai.org)
2023-08-30 11:57:08

*Thread Reply:* Fascinating discussion for those of us working in the marine space. We get asked all the time “how many fisheries are unsustainably fished”; even just a simple count of “how many fish are there” is incredibly challenging. There is an older article from the Atlantic on the topic: http://webcache.googleusercontent.com/search?q=cache:https://www.theatlantic.com/science/archive/2016/10/how-many-fish-are-in-the-sea/502937/ There is a quote in there that stuck with me:

“1970s with John Shepherd, a fisheries management specialist at England’s University of Southampton: Counting fish is like counting trees, but the trees are invisible and constantly on the move.”

This topic is really critical to us, and we would like to do a better job of answering these types of questions, and more broadly linking our work (Skylight) to these metrics where possible. I recently spoke about this question with someone who has done a ton of work in this space. He's not in this Slack group, but he would be a great person to contact; he has an extremely nuanced and pragmatic view. The one number he cited that stuck with me is that 60-70% of the fisheries they have studied are unsustainably harvested (i.e., will eventually collapse). (I am not an expert in this space, but trying to get up to speed.)

@Cameron Trotter I am very interested in that lit review, if I can help you in any way, let me know.

scholar.google.com.au
🙏 Eric Colson
Cameron Trotter (cater@bas.ac.uk)
2023-08-30 12:07:17

*Thread Reply:* @Patrick Beukema more than happy to send it your way when I'm able. Would be good to keep in touch if you're working in the space 🙂

Patrick Beukema (patrickb@allenai.org)
2023-08-30 12:15:33

*Thread Reply:* yeah that sounds great. maybe we could even set up a talk or something? A lot of people on our team would be interested. It's quite a complex space to model/get good data on, with all the different dynamics, human behavior, stats, and a lot of noise.

Malte Pedersen (mape@create.aau.dk)
2023-08-30 12:33:17

*Thread Reply:* I would be interested in this as well. I am currently working on measuring whether/to what degree trawlers affect the seafloor and marine habitats in the North Sea west of Denmark. Also, for those of you who are interested, I will defend my PhD on September 8. The topic is detection and tracking of fish using computer vision from controlled lab environments to the wild. I will give a 45 minute presentation about my work aimed at a broad audience who doesn't necessarily know anything about computer vision and machine learning (so a lot of images and very few equations). I will put up a zoom link in #marine next week.

🙌 Patrick Beukema, Cameron Trotter, Shir Bar, Eric Colson, Levi Cai, Alexander Merdian-Tarko
📆 Cameron Trotter
Elizabeth Campolongo (e.campolongo479@gmail.com)
2023-08-31 14:53:51

*Thread Reply:* Have you checked out FathomNet for undersea images?

🐟 Malte Pedersen, Eric Colson
Cameron Trotter (cater@bas.ac.uk)
2023-09-01 04:49:48

*Thread Reply:* @Elizabeth Campolongo I have yet to make use of it, though it is on my todo list. I work with Antarctic benthic imagery however, which is an area that even by marine data standards can be somewhat sparse, so I am unsure if it would help in my specific situation, though it may be a good resource for those who work in more explored waters. I'd be interested to hear from those who have used it in the past for their work, it could be an extremely valuable resource!

Sara Beery (sbeery@caltech.edu)
2023-08-30 21:55:05

https://twitter.com/CstGhosh/status/1695350270958490069

X (formerly Twitter)
👍 Justin Kay, Jason Holmberg (Wild Me), Eric Colson, Omiros Pantazis, Oisin Mac Aodha, Shir Bar, Elizabeth Campolongo
💕 Jon Van Oast, Katelyn Morrison, Caleb Robinson
🌏 Rowan Converse
🙏 Patrick Beukema, Alex Brace
Paul Allin (allinpaul@gmail.com)
2023-09-05 10:32:39

Hi everyone, I’m looking at using ML to count wildlife in an open system and thinking about ways of reducing the amount of data I need to process. Does anyone here know if I can take two images, say 1 day apart, and subtract one from the other to be left with only the non-stationary objects? Hope this makes sense…

Maximilian Schall (Maximilian.Schall@hpi.de)
2023-09-05 10:41:33

*Thread Reply:* Hey Paul,

Given that the camera is stationary, the keyword here would be background subtraction. We had some experiments with videos, where it works quite well. Whether it works well enough with just two images depends on the scenario and your requirements.
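[Editor's note] As a toy illustration of two-frame background subtraction (pure NumPy; the 0.2 relative threshold is an arbitrary placeholder you would tune per scene, and the frames are assumed already co-registered):

```python
import numpy as np

def frame_diff_mask(frame_a, frame_b, rel_thresh=0.2):
    """Naive two-frame background subtraction.

    frame_a, frame_b: grayscale images of the same shape, already
    co-registered. Returns a boolean mask of pixels whose absolute
    difference exceeds rel_thresh times the maximum difference.
    """
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    return diff > rel_thresh * diff.max()

# e.g. an "animal" appearing as a bright 4x4 patch in the second frame:
bg = np.zeros((32, 32))
fg = bg.copy()
fg[10:14, 10:14] = 1.0
mask = frame_diff_mask(bg, fg)  # True only on the 4x4 patch
```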

Devis Tuia (devis.tuia@epfl.ch)
2023-09-05 10:42:38

*Thread Reply:* hey Paul, it seems like a reasonable assumption at first, but there are some caveats.. you need to be able to fly twice (seems that you do) and you need to acquire the images at the same time of day (to avoid shadows or strong illumination changes). You also want to avoid flying on a sunny day vs. a cloudy one (you are in South Africa, right?). Finally, you will also need very precise co-registration of every pixel, otherwise you will detect a lot of borders as objects.

😲 Rita Pucci
Devis Tuia (devis.tuia@epfl.ch)
2023-09-05 10:55:37

*Thread Reply:* on a stationary camera this is ok-ish (as Maximilian says), but you seem to be working with drones, right?

Quentin Bateux (quentin.bateux@yale.edu)
2023-09-05 11:24:52

*Thread Reply:* To add on the previous suggestions: if you have small uncontrolled camera motion, you may need a preprocessing step where you perform 'image alignment' before running the background subtraction process.

👍 Sara Beery
Paul Allin (allinpaul@gmail.com)
2023-09-05 11:47:34

*Thread Reply:* Thanks for the swift responses. Yes, this would be from a drone, and I was thinking that with real-time white balance I could correct the lighting for each image. I do understand that slight variations in height will result in different pixel sizes, but currently they are around 5 cm, so a slight difference in height won't have a big impact on resolution

Dan Morris (agentmorris@gmail.com)
2023-09-05 17:59:03

*Thread Reply:* I'm interested whether drone folks know of anything off the shelf for this... though Devis is definitely "drone folks". 🙂 So the fact that he didn't say "use package X that everyone knows about" suggests it's not an off-the-shelf thing. And it seems like even if you have GPS on the drone, to even try background subtraction, you need registration that's tighter than what you'll get from an off-the-shelf georeferencing tool. Although I say that with no basis in reality, I've never tried.

But assuming you will have to be in the business of writing code, and assuming you want to conduct said business in Python, I would personally start with OpenCV's RANSAC implementation (RANSAC is a robust fitting algorithm commonly used to estimate the homography that aligns two images from keypoint matches):

https://docs.opencv.org/3.4/d1/de0/tutorial_py_feature_homography.html

Good tutorial here:

https://learnopencv.com/image-alignment-feature-based-using-opencv-c-python/

Even once you have aligned images, background subtraction is a game of heuristics: this is hard even for camera traps, which are literally bolted to a tree. But if your objects are large and salient relative to random other things that change between images (shadows, moving grass, things getting wet, etc.), doable.

Definitely let this Slack know what you learn!
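[Editor's note] If full feature matching is overkill and the offset between passes is roughly a pure translation, the co-registration step can be sketched in pure NumPy with phase correlation (a toy illustration, not a substitute for the OpenCV pipeline described above; it only recovers integer-pixel translations):

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the (dy, dx) translation to apply to `moved` (via
    np.roll) to align it to `ref`. Phase correlation: the peak of
    the inverse FFT of the normalized cross-power spectrum sits at
    the translation between the two images."""
    f_ref = np.fft.fft2(ref)
    f_mov = np.fft.fft2(moved)
    cross = f_ref * np.conj(f_mov)
    cross /= np.abs(cross) + 1e-12  # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap shifts past the midpoint into negative offsets
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, (3, 5), axis=(0, 1))   # simulate a (3, 5)-pixel offset
dy, dx = estimate_shift(ref, moved)
aligned = np.roll(moved, (dy, dx), axis=(0, 1))  # now matches ref
```

After alignment, the difference image can feed the heuristic background subtraction discussed above.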

👆 Quentin Bateux
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-09-05 18:08:45

*Thread Reply:* Although I have never tried background subtraction with drone images, my gut feeling is that precise alignment is going to be very difficult, especially with off-the-shelf drones. If you have access to high-frequency telemetry (self-built drones), it might be of some help. If you are flying off-the-shelf, perhaps an Anafi is a good option to go with. They are cheap and come with very 'accessible' telemetry and programmable control methods -- so you can program paths to fly repeatedly, for example.

Also, going back to the original motivation -- it would be great to know the actual context/scale of the data-reduction need. Perhaps a completely alternative solution may emerge there.

Jan Kees (jankees.schakel@sensingclues.org)
2023-09-06 02:30:23

*Thread Reply:* hi Paul, it does matter what type of wildlife you want to count. Are you looking at elephants and rhinos (as a must), and at lions, cheetahs, and other (smaller to much smaller) species as a nice-to-have?

Paul Allin (allinpaul@gmail.com)
2023-09-06 10:23:25

*Thread Reply:* I definitely intend to explore this further based on the responses so far, but I'm not putting all my eggs in one basket. The drone I will be using is a custom-built fixed-wing with very accurate on-board GPS, but perhaps even that is not sufficient. Seems like someone has to go out and try 🙂 @Jan Kees I am looking at ‘large’ mammalian herbivores, i.e. impala and bigger

👍 Jan Kees
Thijs (thijs@q42.nl)
2023-09-06 04:02:25

For those interested, here is a (technical) writeup of a project we have done in the Carpathian mountains to try and keep bears 🐻 out of villages and prevent human-wildlife conflict. In many ways this is similar to human-elephant conflict, but there were some distinct (technical) challenges we had to deal with; for example, Romania is not as sunny ☀️ as Africa :)

https://engineering.q42.nl/ai-bear-repeller/

Q42 Engineering
Written by Thomas Broch
👍 Martin Marzidovsek, Nicolas Arrieta Larraza, Omiros Pantazis, Timm Haucke, Anastasios Angelopoulos, Paul Melki, Prabath Gunawardane, Dan Morris, Alasdair Davies
❤️ Avi Sundaresan, Yseult Hb, Justine Boulent, Timm Haucke, Leonardo Viotti, Marius Miron, Regina Eckert, Anjana Sengupta, Jon Van Oast, Anastasios Angelopoulos, Paul Melki, Chuck Stewart, Steve Haddock, Aakash Gupta
🐻 Anastasia Pagán, Rowan Converse, Anastasios Angelopoulos, Valentin Gabeff, Paul Melki, Varshani Brabaharan, Talia Speaker
Thijs (thijs@q42.nl)
2023-09-06 04:03:55

*Thread Reply:* Just look at one of the beautiful images our system captures, this one is from this morning ❤️

🤯 Jon Van Oast, Anastasia Pagán, Anastasios Angelopoulos, Varshani Brabaharan
Nicolas Arrieta Larraza (n.arrieta.larraza@gmail.com)
2023-09-06 04:34:42

*Thread Reply:* Awesome work! 😊

❤️ Thijs
Ștefan Istrate (stefan.istrate@gmail.com)
2023-09-06 07:23:50

*Thread Reply:* Well done! 👏

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-09-08 07:31:55

*Thread Reply:* This is great! Through NASSCOM and TAIM we are organizing an online roundtable on real-time monitoring of wildlife; would you be interested? I have dropped you a DM, please check.

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-09-06 18:28:55

Does anyone have a go-to deep learning/neural network infographic or figure that does a really good job of giving a VERY high-level summary geared towards a non-ML audience?

Patrick Beukema (patrickb@allenai.org)
2023-09-06 19:18:17

*Thread Reply:* this is my go-to: https://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=4,2&seed=0.14028&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false (not sure if it's the right level though)

playground.tensorflow.org
👍 Carly Batist
Gracie Ermi (gracieermiifthen@gmail.com)
2023-09-06 19:31:09

*Thread Reply:* I've turned this into a gif in the past: the visualization right at this point in this code.org video is pretty good. You might be looking for something more explanatory of the nuts and bolts, but I feel like it visualizes the idea of model training in a helpful way: https://youtu.be/KHbwOetbmbs?t=126

YouTube
Code.org (https://www.youtube.com/@codeorg)
👍 Carly Batist
Roni Choudhury (roni.choudhury@kitware.com)
2023-09-08 11:29:52

*Thread Reply:* i don't know if this is the right level/tone: https://www.linkedin.com/posts/bettymohler_post-ugcPost-7018185045085409280-YW_J/

linkedin.com
👍 Carly Batist
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2023-09-08 12:42:54

I recently had an email from a colleague wanting to introduce MegaDetector into their camera trap framework. I know how we did it in 2021, but I disconnected a bit this year from species classification matters to analyze the data and write papers, and everything moves so quickly in the AI arena that I wanted to make sure I give them the latest advice. Are there any recent developments on the UI front for running MegaDetector on large datasets?

Bistra Dilkina (dilkina@usc.edu)
2023-09-08 14:48:46

*Thread Reply:* Following, as I similarly have a collaborator organization interested in using it with a UI

Peter van Lunteren (contact@pvanlunteren.com)
2023-09-08 15:19:42

*Thread Reply:* I created a GUI to deploy MegaDetector and train custom species detectors: https://github.com/PetervanLunteren/EcoAssist

Stars
64
Language
Python
❤️ Tiziana Gelmi Candusso, Sepand Dyanatkar
Dan Morris (agentmorris@gmail.com)
2023-09-08 16:22:40

*Thread Reply:* EcoAssist (which Peter referred to) is the most common GUI for running MegaDetector locally. CamTrap Detector (https://camtrap.net/detector) is also quite polished.

I track all GUIs and other non-traditional ways of running MegaDetector here:

https://github.com/agentmorris/MegaDetector/blob/main/megadetector.md#is-there-a-gui

We also have a 100.000% success rate at helping ecologists get MD running at the command line :), which is sometimes still the best option, depending on the project.

Always feel free to point people who have questions to cameratraps@lila.science (that's me and Siyu), we love hearing from users and can usually quickly route people to the right tools given their specific project requirements.

💚 Carl Boettiger, Carly Batist, Toryn Schafer
❤️ Tiziana Gelmi Candusso
👍 Bistra Dilkina, Cameron Trotter
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2023-09-08 22:33:45

*Thread Reply:* Thank you Dan! I will point them towards the GUI and eventually the option of sending you guys an email. I was hesitant about sending them directly to the latter; I didn't want to overload you all.

Sara Beery (sbeery@caltech.edu)
2023-09-08 12:54:43

On our end it's still the same public GitHub repo

Sara Beery (sbeery@caltech.edu)
2023-09-08 12:55:31

There are other tools people have developed around it, but since I just run direct from the code I can't speak to the user experience. This would be a good question to ask in the AI for Conservation slack

💯 Tiziana Gelmi Candusso
Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-09-11 17:21:07

Hi everyone, I am a Ph.D. student at The Ohio State University, advised by Prof. Yu Su. I am working on developing species classification models for camera traps using auxiliary information such as taxonomy, location coordinates, etc. I am currently looking to use the lila.science datasets (https://lila.science/datasets) to benchmark my approach. Is there any public benchmark or leaderboard for lila.science datasets? I am aware of the kaggle leaderboard for iwildcam datasets but it doesn't have much information on the methodology of the submission in most cases. Thanks.

Dan Morris (agentmorris@gmail.com)
2023-09-11 17:55:54

*Thread Reply:* To my knowledge, no one has trained a Very Big Model on LILA datasets, so there's not a uniform benchmark. A couple notes that may be useful, though:

  1. I'm increasingly encouraging everyone to think of "all the camera trap data on LILA" as one dataset. It's clearly evolved as a collection of individual datasets, but because it's been mapped to a common taxonomy and because there is a Big Giant .csv file that represents every camera trap image on LILA (both at https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/), you can treat it as one big dataset.
  2. All of the camera trap data on LILA (except for thermal or stereo data) is also available as a Hugging Face dataset (https://huggingface.co/datasets/society-ethics/lila_camera_traps), based on that same Big Giant .csv file, which also may facilitate benchmarking.
  3. I have been trying to keep track of papers that report numbers on individual LILA datasets; look for the "LILA" tag here: https://agentmorris.github.io/camera-trap-ml-survey/#papers-with-summaries... and tell me what I'm missing!
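To make point 1 concrete, here's a toy sketch of querying across collections via that one .csv; the column names here are made up for illustration, not the actual schema of the LILA metadata file:

```python
import pandas as pd

# Illustrative only: "dataset_name", "url", and "scientific_name" are
# hypothetical column names, and this tiny frame stands in for the real
# file, which you would load with something like:
#   df = pd.read_csv("lila_camera_trap_images.csv")
df = pd.DataFrame({
    "dataset_name": ["Snapshot Serengeti", "Orinoquia Camera Traps", "ENA24"],
    "url": ["http://example/img1.jpg", "http://example/img2.jpg",
            "http://example/img3.jpg"],
    "scientific_name": ["panthera leo", "panthera onca", "ursus americanus"],
})

# Because every row is mapped to a common taxonomy, one query spans all of
# the constituent datasets at once:
cats = df[df["scientific_name"].str.startswith("panthera")]
print(len(cats))  # 2
```

The point is just that once everything shares one taxonomy and one table, "all the camera trap data on LILA" really does behave like a single dataset.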
👍 Vardaan Pahuja, Sara Beery, Valentin Gabeff, Fagner Cunha
Dan Morris (agentmorris@gmail.com)
2023-09-11 17:59:50

*Thread Reply:* Also note that there is no precise location information attached to any data on LILA, though where possible, the Big Giant .csv file indicates country/continent/state.

👍 Vardaan Pahuja
Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-09-11 18:01:57

*Thread Reply:* Sounds good, thanks a lot for these resources. I will reach out to you if I come across any papers apart from those mentioned in your repo.

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-09-12 00:49:47

*Thread Reply:* I have executed three projects, two of which are deployed and running in the field. Please refer to the following research in reverse chronological order:

  1. Hakunamatata - This was a hackathon on DrivenData where the winning model used a fine-tuned EfficientNet trained on the Serengeti dataset. I believe this was around 2019-20
  2. DeCaTron - We have deployed a two-stage model (MDv5 + Shifted Window classifier) in one of our projects in India.
  3. ReWilding Europe - This project was centered on an ambitious initiative to reintroduce wildlife into the forests of Europe. We utilized the Yolov7 backbone to train and subsequently deploy a model targeting 30 distinct species. A primary challenge you may encounter pertains to training for the long-tail distribution, especially in scenarios where data is limited. Additionally, it's crucial to consider how your pipeline addresses and tracks false positives.
👍 Vardaan Pahuja
Devis Tuia (devis.tuia@epfl.ch)
2023-09-14 08:43:26

🚨 Special Issue alert! With @Sara Beery, @Blair Costelloe and @Ruth Oliver we invite you to submit to our new special issue on AI and ecology on Methods in Ecology and Evolution!

Deadline for short white papers is December 1st!

👍 gvanhorn, Oisin Mac Aodha, Cameron Trotter, Lukas Picek, Omiros Pantazis, Benjamin Hoffman, Shir Bar, Vardaan Pahuja, Gustavo Perez, Timm Haucke, Andrew Schulz, Robin Zbinden, Sara Keen, Alexander Merdian-Tarko
🎉 Sara Beery, Katelyn Morrison, Justin Kay, Lukas Picek, Jon Van Oast, Evan Eskew, Taiki Sakai - NOAA Affiliate, Timm Haucke, Robin Zbinden, Valentin Gabeff
❤️ Rowan Converse, Jon Van Oast, Mitchell Rogers, Andrew Schulz, Robin Zbinden
Hamed Alemohammad (h.alemohammad@gmail.com)
2023-09-14 09:46:07

Hi All, we are hiring for two positions at Clark University Center for Geospatial Analytics: 1) Program Director and 2) Geospatial Software Engineer. Please share these with those who might be interested: https://bit.ly/ClarkCGAPositions

👍 Sara Beery
Dan Stowell (dan.stowell@naturalis.nl)
2023-09-15 10:22:30

Hi all. We have 10 funded PhD positions to offer on "Bioacoustic AI"! See https://bioacousticai.eu/ and click on "Apply for a PhD position" to link directly to the currently-open opportunities

Bioacoustic AI
‼️ gvanhorn, Devis Tuia, Sara Beery, Sonny Burniston, Suzanne Stathatos, Oisin Mac Aodha, Carly Batist, Jason Holmberg (Wild Me), Toryn Schafer, Angela Szesciorka, Mikey Tabak, Ben Weinstein, Maddie Cusimano
🎵 Sara Beery, Sonny Burniston, Dan Morris, Jason Holmberg (Wild Me)
❤️ Carly Batist, Jon Van Oast, Jason Holmberg (Wild Me), Viktor Domazetoski, Talia Speaker, Amara McCune, Nicolas Arrieta Larraza, Maddie Cusimano
🐋 Taiki Sakai - NOAA Affiliate, Alexander Merdian-Tarko
🎯 John Martinsson
Amara McCune (amaramccune@gmail.com)
2023-09-16 20:11:18

*Thread Reply:* This looks incredible, thanks for sharing! I was wondering if you know of any opportunities like this that are in the US and open to candidates with a PhD already?

Dan Stowell (dan.stowell@naturalis.nl)
2023-09-19 06:01:12

*Thread Reply:* Hi Amara - I'm no expert on the US scene I'm afraid, but I'm sure there are others here who can give tips!

Dan Stowell (dan.stowell@naturalis.nl)
2023-09-19 07:32:35

*Thread Reply:* Here's a postdoc option at Cornell: https://academicjobsonline.org/ajo/jobs/24693

academicjobsonline.org
Amara McCune (amaramccune@gmail.com)
2023-09-19 13:15:28

*Thread Reply:* Thanks!

Olof Mogren (olof.mogren@ri.se)
2023-09-18 09:18:00

Hi! On Thursday, @Nico Lang from Copenhagen University is speaking on global vegetation monitoring with probabilistic deep learning in our seminar series RISE Learning Machines. The seminar is open to all on Zoom (and in person in Lund, Sweden), but requires registration! 15:00 CET. https://www.ri.se/en/learningmachinesseminars/nico-lang-global-vegetation-monitoring-with-probabilistic-deep-learning

RISE
❤️ Justin Kay, Oisin Mac Aodha, Ben Weinstein, gvanhorn, Kristina Kupferschmidt, Suzanne Stathatos, Sara Beery, Avi Sundaresan, Catarina Silva, Aleksis Pirinen
📆 Shir Bar, Aleksis Pirinen
🌲 Dan Morris, Aleksis Pirinen
Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-09-18 15:33:18

Hi everyone, I have a question regarding the ENA24-detection dataset https://lila.science/datasets/ena24detection available in lila.science. The associated publication "Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild. IEEE Transactions on Circuits and Systems for Video Technology, 2019" is not available in IEEE Xplore or any bibliographic database. There is only a citation on Google Scholar (the article itself is missing). I tried to contact the author (Hayder Yousif), but the email address no longer exists. If anyone has access to this paper, please let me know. Thanks!

Dan Morris (agentmorris@gmail.com)
2023-09-18 20:00:41

*Thread Reply:* I don't have the paper, and the email address I have for Hayder is his Missouri email (presumably that's the one that no longer works), but Roland Kays is a co-author on the paper and IIRC one of Hayder's co-advisors, so Roland can probably help. I'll DM you with Roland's email address.

Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-09-18 20:01:18

*Thread Reply:* Sounds good, thanks!

Jacob Marks (jamarks13@gmail.com)
2023-09-18 15:53:28

Hey everyone! ☕🏔️📡🧊 One of the AI for Conservation community members, @Nora Gourmelon, published the CaFFe benchmark dataset for tracking glacial calving fronts with SAR imagery. Nora was kind enough to tell me about her research so that I could write a popular science article about it, which was published this morning 🙂

If you're interested in glacial calving, computer vision for climate, or supporting your fellow AI for Conservation community members, check out the blog post! https://medium.com/voxel51/caffe-calving-fronts-and-where-to-find-them-1ff57520da45

👏 Vincent Christlein, gvanhorn, Amara McCune, Sara Beery, Cameron Trotter, Leopoldo André Dutra Lusquino Filho, Alexander Merdian-Tarko
😎 Jon Van Oast
👏:skin_tone_4: Chris Llorca
❤️ Nora Gourmelon
Ben Weinstein (benweinstein2010@gmail.com)
2023-09-19 06:09:21

I'm sitting in a great talk that I think would be helpful to anyone studying museum specimens who wants to mark body regions and extract morphological locations. https://github.com/EchanHe/PhenoLearn . The developer is here at our session.

👀 Elizabeth Campolongo, Yseult Hb, Elie Alhajjar, Shir Bar
❤️ Sara Beery, Rita Pucci
😎 Jon Van Oast, Shir Bar
Sara Beery (sbeery@caltech.edu)
2023-09-19 12:43:24

New NSF Global Center on AI and Biodiversity Change!! Led by @Tanya Berger-Wolf with @Justin Kitzes @David Rolnick Graham Taylor Kaitlin Gaynor Marta Jarzyna Laura Pollock @Oisin Mac Aodha @Devis Tuia @Tilo Burghardt Bernd Meyer and me. Very honored to be a part of this!

Read more: https://twitter.com/sarameghanbeery/status/1704170649281802530?t=KZavdjAKMPESV0E1HQiozw&s=19

X (formerly Twitter)
👍 Vardaan Pahuja, Ben Weinstein, Paul Allin, Elie Alhajjar, Nico Lang, Vincent Christlein, Jacob Marks, Anastasia Pagán, gvanhorn, Dan Morris, Olof Mogren, FANQI Z, Ruben Remelgado, Subhransu Maji, Risa Shinoda, Alex Brace, Cameron Trotter, Leopoldo André Dutra Lusquino Filho, Ted Schmitt, Hemal Naik, Robin Zbinden, Maddie Cusimano, Sepand Dyanatkar, Benjamin Tremoulheac, Valentin Gabeff, charlotte, Rebecca Wilks
😎 Jon Van Oast, Ștefan Istrate, Elie Alhajjar, Oisin Mac Aodha, Anastasia Pagán, Casey Youngflesh
❤️ Justin Kay, Bistra Dilkina, Leonardo Viotti, Sowbaranika, Devis Tuia, Rowan Converse, Malte Pedersen, David Rolnick, Declan, Stefan Schneider, Mélisande Teng, Elie Alhajjar, Arjun Subramonian (they/them), Gustavo Perez, Amara McCune, Oisin Mac Aodha, Jacob Marks, Elizabeth Campolongo, Anastasia Pagán, Viktor Domazetoski, Talia Speaker, Shir Bar, Risa Shinoda, aruna, Alex Brace, Lukas Picek, Omiros Pantazis, Andrew Schulz, Ted Schmitt, Robin Zbinden, Valentin Ștefan, Olivier Dietrich, Ronan Wallace
🙌 Shir Bar, Carly Batist, Jenna Kline, Evan Eskew, Omiros Pantazis, Carl Boettiger, Ruth Oliver, Yseult Hb, Ronan Wallace
🎉 Matt Weldy, Ruth Oliver, Shir Bar, charlotte, Urs, Alexander Merdian-Tarko
Bistra Dilkina (dilkina@usc.edu)
2023-09-19 12:45:39

*Thread Reply:* This is just amazing news!!! Looking forward to what this initiative brings to our field!

❤️ Sara Beery
😍 Sara Beery, David Rolnick
Arjun Subramonian (they/them) (arjun.subramonian@gmail.com)
2023-09-19 13:08:03

*Thread Reply:* Congrats 💜

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-09-20 07:56:18

Really excited to announce a new white paper from Rainforest Connection (RFCx): Harnessing the Power of Sound & AI to track Global Biodiversity Framework (GBF) Targets.

The paper explores the power of ecoacoustics and AI to monitor biodiversity and track progress towards GBF targets using case studies from around the world. 🔊🌳

Read it here ➡️ https://rfcx.org/publications/harnessing-the-power-of-sound-and-ai-to-track-global-biodiversity-framework-gbf-targets

👍 gvanhorn, Justin Kay, Dan Morris, Georgia Atkinson, Leopoldo André Dutra Lusquino Filho, Viktor Domazetoski, Dante Wasmuht, Yseult Hb, Marconi Campos
🔈 Suzanne Stathatos, Rowan Converse, Amee Assad, Marconi Campos
🎉 Eric Greenlee, Gracie Ermi, Talia Speaker, charlotte, Marconi Campos
😎 Jon Van Oast, Marconi Campos
Leopoldo André Dutra Lusquino Filho (leopoldo.lusquino@unesp.br)
2023-09-20 10:35:32

Hello everyone! We are in the process of establishing a Green AI center in Brazil dedicated to climate research, named GRAIN. This center comprises three of Brazil's leading universities: the Federal University of Rio de Janeiro, São Paulo State University, and the Federal Fluminense University. Currently, GRAIN brings together 12 faculty members from various disciplines, including Computer Science, Electrical Engineering, Environmental Sciences, Meteorology, and Health, as well as over 60 undergraduate students and 20 PhD students.

The primary focus of our research group will be a large-scale project aimed at developing techniques for predicting extreme weather events, both forecasting and nowcasting. Additionally, we will work on the creation of Digital Twins for monitoring hydrodynamic reservoirs using Green ML, TinyML, Deep Learning, Weightless Neural Networks, Hyperdimensional Computing, IoT, Data Fusion, and Physics-informed Neural Networks. For this project, we are fortunate to have the support of the Brazilian Navy, the São Paulo State Watershed Committee, and the AI Hubs in the states of Rio de Janeiro and São Paulo.

This is a long-term project, and we are actively seeking international collaborations. If you are affiliated with a research center whose work aligns with ours and are interested in establishing collaborations, please do not hesitate to reach out to me at leopoldo.lusquino@unesp.br.

❤️ Jon Van Oast, Sara Beery, Hemal Naik, Yseult Hb, Philippe Hermant, Carly Batist
🌎 Sara Beery
Peter van Lunteren (contact@pvanlunteren.com)
2023-09-21 06:44:04

Hi everyone 👋

I would like to introduce my recently founded company, Addax Data Science. 🚀 At Addax, we have the simple mission of providing tools that enable ecologists to spend less time on boring tasks and more time on conservation. Think statistics, software, automation, and custom identification models for bird vocalizations or camera trap images. Visit the website for more information: https://addaxdatascience.com/

Do you have any questions or want to know more? Don't hesitate to get in touch!

Addax Data Science
🙌 Carly Batist, Sara Beery, Dan Morris, Timm Haucke, Yseult Hb, Marconi Campos
Jan Kees (jankees.schakel@sensingclues.org)
2023-09-27 07:04:11

*Thread Reply:* congrats Peter 🙂

😁 Peter van Lunteren
Sara Beery (sbeery@caltech.edu)
2023-09-21 07:26:01

Who is going to ICCV? Or who is local in Paris?

I'd love to do another bird walk!

🐦 Oisin Mac Aodha, Risa Shinoda, Katelyn Morrison, Subhransu Maji, Shir Bar, Caleb Robinson, Yseult Hb, Joakim Bruslund Haurum, Peter Kulits
Sara Beery (sbeery@caltech.edu)
2023-09-21 07:29:29

*Thread Reply:* I've heard reports of several research collaborations getting started at the CVPR bird walk, which is awesome 🎊

Oisin Mac Aodha (macaodha@caltech.edu)
2023-09-21 07:30:17

*Thread Reply:* I'm going and would love to join!

❤️ Sara Beery
Risa Shinoda (shinoda.lisa.47z@gmail.com)
2023-09-21 08:09:53

*Thread Reply:* I’ll be there and I’d love to join in! 🐥

❤️ Sara Beery
Devis Tuia (devis.tuia@epfl.ch)
2023-09-21 08:31:28

*Thread Reply:* unfortunately not 😞

Sara Beery (sbeery@caltech.edu)
2023-09-21 08:32:20

*Thread Reply:* @Devis Tuia as if you would wake up early 😂😂

Devis Tuia (devis.tuia@epfl.ch)
2023-09-21 08:35:11

*Thread Reply:* it’s in my DNA. But I was announcing that sadly I won’t be in Paris at all

😞 Sara Beery, Oisin Mac Aodha
Jonathan Roberts (jdr53@cam.ac.uk)
2023-09-21 08:55:42

*Thread Reply:* I’m also going and would be keen to join 🙂

❤️ Sara Beery
Diego Marcos (diego.marcos.gonzalez@gmail.com)
2023-09-21 11:15:05

*Thread Reply:* Working with the Pl@ntNet people, I should be proposing a plant walk, but I'd be happy to join for the birds 😅

😍 Sara Beery
Caleb Robinson (calebrob6@gmail.com)
2023-09-21 12:40:52

*Thread Reply:* I'll be there and would love to come too!

😍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2023-09-21 12:42:12

*Thread Reply:* We can look at plants too!!!!

😄 Diego Marcos
Thomas Radinger (thomasrad@protonmail.com)
2023-09-22 03:07:54

*Thread Reply:* Local in Paris and would love to join as well for birds and plants :)

❤️ Sara Beery
😍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2023-09-27 18:45:41

*Thread Reply:* https://twitter.com/sarameghanbeery/status/1707164338769875380?t=UVWoUdHgoeoTkrxtERseQ&s=19

X (formerly Twitter)
😍 Jon Van Oast, Oisin Mac Aodha
🥐 Suzanne Stathatos
🦢 Jonathan Roberts
🙌 Thomas Radinger
👍 Thor Veen
Sara Beery (sbeery@caltech.edu)
2023-10-04 10:16:56

*Thread Reply:* https://twitter.com/sarameghanbeery/status/1709547363012948263?t=0j3QXp5lmzmHsqLf6tGHKQ&s=19

X (formerly Twitter)
🥖 Jon Van Oast, Frederic
❤️ Jon Van Oast
🪶 Shir Bar
🥐 Diego Marcos
Barbie D (barbara.raven42@gmail.com)
2023-09-21 11:20:54

https://www.smithsonianmag.com/science-nature/four-amazing-impacts-of-this-ai-powered-bird-migration-tracker-180982932/

This is so cool!! Thought I’d share this article about BirdCast: (1) BirdCast can help target city-wide lights-out campaigns to correspond with high-migration times; (2) New York City Audubon uses it to predict the number of dead birds they’re going to see and plan accordingly; (3) birds become “trapped” in the 9/11 tribute lights, and volunteers monitor that and turn the lights off when too many birds are trapped; (4) you can see the speed, direction, and altitude of migrations!! (5) it can warn farmers when to be on high alert for avian flu

Smithsonian Magazine
😮 Jon Van Oast, Eric Greenlee
👍 gvanhorn, Sara Beery, Dan Morris, Viktor Domazetoski
Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-09-24 01:16:18

Hi everyone, I have a question regarding the MegaDetector detections available for lila.science. Some datasets (e.g. Orinoquía Camera Traps) have multiple animals annotated per image in some cases. Corresponding to these, MegaDetector also provides multiple detections with high confidence. Is there a way to map the detections to their respective categories? The MegaDetector categories are just animal, person, and vehicle, so they are not useful in this case.

Dan Morris (agentmorris@gmail.com)
2023-09-24 20:58:39

*Thread Reply:* There's not a way to do this reliably, sorry. If you are, for example, combining those MegaDetector boxes with image-level labels to train a multiclass detector, you will likely have to either (a) ignore images with multiple species or (b) manually assign labels to boxes for those images. FWIW, outside of the African datasets, multi-species images are extremely rare on LILA, so you probably don't lose a lot of information if you discard multi-species images in, for example, Orinoquia Camera Traps, and/or it would be super-duper fast to manually assign labels for the relatively small number of boxes this affects.

If you are working with the Snapshot Safari data (where multi-species images are common enough to possibly want a solution), you could run a classifier trained on cropped animals on each box in each of those images, and I think 99.9% of the time, even if the classifier isn't great, you would be able to assign boxes to labels (e.g. if you had zebras and warthogs in the same image, you only really need to rely on the classifier to assign a higher probability to zebra than warthog or vice-versa to help you assign those boxes). But even there, I think it would be faster to do it manually.
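As a toy sketch of that ranking idea (the classifier is stubbed out with made-up probabilities; in practice it would run on the pixels cropped from each MegaDetector box):

```python
# For a multi-species image, assign each detected box the image-level label
# that the crop classifier scores highest. Even a weak classifier only needs
# to get the *relative* ranking right within the image's known label set.
def assign_labels(boxes_probs, image_labels):
    """boxes_probs: one dict per box mapping species -> classifier probability.
    image_labels: species known (from image-level labels) to be present."""
    return [max(image_labels, key=lambda s: probs.get(s, 0.0))
            for probs in boxes_probs]

# Made-up probabilities for two boxes in a zebra+warthog image:
probs = [{"zebra": 0.4, "warthog": 0.1}, {"zebra": 0.2, "warthog": 0.3}]
print(assign_labels(probs, {"zebra", "warthog"}))  # ['zebra', 'warthog']
```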

Dan Morris (agentmorris@gmail.com)
2023-09-24 21:00:15

*Thread Reply:* Actually if you are training a multiclass detector, the most elegant solution is probably to leave those out, train an initial detector, use that detector to assign labels to that small set of boxes, manually clean up what should now be a very small number of mistakes, and re-train.

Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-09-24 21:36:34

*Thread Reply:* Sounds good, thanks for the information!

Olof Mogren (olof.mogren@ri.se)
2023-09-25 02:20:46

Amazing seminar last Thursday with @Nico Lang in our Learning Machines series. Thanks @Sara Beery, @Ben Weinstein, @Alexander MathisEPFL and others for attending in-person and contributing to a great event.

We have created a new youtube playlist with all our environment-related talks. Check it out!

https://www.youtube.com/playlist?list=PLqLiVcF3GKy0-jZFGg-VqLzh51LqCfduN

November 2, Pria Donti is visiting our seminar! More info: https://ri.se/lm-sem

YouTube
RISE
👍 Oisin Mac Aodha, Valentin Gabeff, Shir Bar, Ștefan Istrate, Paul Melki, gvanhorn, Elie Alhajjar, Declan, Carly Batist, Ben Weinstein, Olivier Dietrich, Dario Prifti, Alexander Merdian-Tarko, stefano puliti
🙌 John Martinsson, Justin Kay, Elie Alhajjar, Suzanne Stathatos
❤️ Sara Beery, Oskar Åström, Nico Lang
Matt Weldy (matthewjweldy@gmail.com)
2023-09-25 14:41:57

I am currently preparing for my PhD comprehensive exams and digging through a wide range of ecology and methods literature; however, I wanted to reach out to see if anyone had any reading recommendations (book, paper, other) relevant to the interests of this community (AI in conservation). In other words, what reading lists have you recommended, or would you recommend, to PhD students interested in ecology, AI, and analysis?

Here is a simple Google Sheet if anyone has any recommendations. https://docs.google.com/spreadsheets/d/1F7dE7UUYZcU4Rx6tsg4QTgoUcYTqYMh_Kugt0jnzWqA/edit?usp=sharing

Thank you!

😍 Sara Beery, Nanticha Ocharoenchai (Lyn)
🙏 Sara Beery
🎉 Jon Van Oast, Shir Bar
Peter van Lunteren (contact@pvanlunteren.com)
2023-09-25 14:46:16

*Thread Reply:* Sorry, because I’m on my phone I can’t edit your google docs file. I thought this book was an interesting read: AI in the Wild - Sustainability in the Age of Artificial Intelligence - By Peter Dauvergne

👍 Matt Weldy
Matt Weldy (matthewjweldy@gmail.com)
2023-09-25 14:51:56

*Thread Reply:* Thanks! Feel free to add here and I'll move them to the sheet.

Titus (titus@colossal.com)
2023-09-25 16:02:43

*Thread Reply:* We recently released a review on AI for elephant monitoring https://arxiv.org/abs/2306.13803

arXiv.org
👍 Matt Weldy
Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-09-25 21:41:47

Hi, I am unable to download the annotations for Missouri camera traps in LILA https://lila.science/datasets/missouricameratraps. The annotations link hosted by LILA is non-functional https://lilaannex.blob.core.windows.net/lila-annex/missouri_camera_traps_set1.zip Any help is appreciated.

Dan Morris (agentmorris@gmail.com)
2023-09-25 23:20:59

*Thread Reply:* Good catch! Should be fixed now (new link). The issue was... no, actually the issue is really boring, but it should have only impacted this particular file. Let me know if anything looks off, the metadata for this dataset has been... complicated (see the note on the page about some bounding boxes that aren't quite right).

🎉 Jon Van Oast, Sara Beery
Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-09-25 23:31:45

*Thread Reply:* thanks, it works now!

Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-09-28 12:26:52

*Thread Reply:* The page says that the bounding boxes provided by the authors of the Missouri camera traps dataset aren't accurate. How about the MegaDetector results available on LILA? Are those reliable enough to use instead?

Dan Morris (agentmorris@gmail.com)
2023-09-29 18:37:10

*Thread Reply:* It's not that the bounding boxes aren't accurate, rather that for 79 images (there is a list on the page), only one bounding box is available, when in fact there are multiple animals in the image. So if you are using the dataset to train a detector, I would leave out those 79 images. If you are cropping the boxes and using those crops to train a classifier, they're all fine.

But yes, I also expect that the MDv5 results for this dataset are good too.

👍 Vardaan Pahuja
Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-10-02 00:03:38

*Thread Reply:* Hi, while processing the LILA datasets, I found that for most datasets, the set of images in the annotations and the set of images available for download differ slightly. Just want to make sure there is no processing issue at my end.

Dan Morris (agentmorris@gmail.com)
2023-10-02 14:18:15

*Thread Reply:* The only difference you should see is that images with humans are in the metadata, but not in the actual image folders. That would also include images labeled, e.g. "vehicle", and in some cases "domestic dog" or even "horse". When posting datasets to LILA, we avoid posting images that even might have people in them.

If you see a significant number of images in the metadata that are not labeled human and are not present in the cloud bucket(s) or zipfile(s), like missing images of lions or bobcats or whatever, please let me know at info@lila.science. Thanks for checking everything carefully!
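A quick sketch of that sanity check (the label names and the metadata structure here are illustrative, not the actual LILA schema):

```python
# Every image present in the metadata but absent on disk should be explained
# by a human-related label; anything else is worth reporting.
HUMAN_LABELS = {"human", "person", "vehicle", "domestic dog", "horse"}

def unexpected_missing(metadata, files_on_disk):
    """metadata: iterable of (filename, label) pairs.
    Returns filenames missing from disk that are NOT human-related."""
    return [f for f, label in metadata
            if f not in files_on_disk and label not in HUMAN_LABELS]

meta = [("a.jpg", "lion"), ("b.jpg", "human"), ("c.jpg", "bobcat")]
# b.jpg is expected to be absent (human); c.jpg is not:
print(unexpected_missing(meta, {"a.jpg"}))  # ['c.jpg']
```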

👍 Vardaan Pahuja
Drea Burbank (drea@savimbo.com)
2023-09-26 22:37:07

We might have the world’s first certified biodiversity credit. We could really use some open-source coders to help with this. isbm.savimbo.com

https://www.savimbo.com/biodiversity

Savimbo
👍 Jose Ruiz-Munoz
👍:skin_tone_4: Chris Llorca
😎 Jon Van Oast
Casey Clifton (caseyclifton@proton.me)
2023-11-26 02:28:05

*Thread Reply:* Hi Drea, this is really interesting and something I've been trying to research lately. Can you point me at any papers discussing the extent to which indicator species can proxy broader biodiversity and ecosystem health?

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-09-28 09:19:18

Cool use of camera-traps for non-wildlife insights! https://x.com/jamiealison30/status/1706953803474387324?s=20

X (formerly Twitter)
👍 Justin Kay, mimi, Dan Morris, Eric Greenlee, Ștefan Istrate, Shir Bar, Sara Beery, Valentin Gabeff, Aakash Gupta
❤️ Jamie Alison
Justin Kay (justinkay92@gmail.com)
2023-09-28 09:21:06

*Thread Reply:* @Katie Breen

❤️ Sara Beery
Katie Breen (cbreen@uw.edu)
2023-10-18 23:08:04

*Thread Reply:* I love this!! I am catching up on all my missed slacks and this looks awesome. Thank you for sharing Carly 🙂 !! And for tagging me Justin 🙂

👍 Carly Batist
stefano puliti (stefano.puliti@nibio.no)
2023-09-28 14:57:31

For those of you interested in 🌲 and lasers 🚨 here is a new ML-ready benchmark dataset we recently published. It's called FOR-instance and is composed of manually segmented trees in 3D forest point cloud scenes. It can be used for instance (trees) and semantic (tree components) segmentation.

data paper: https://arxiv.org/abs/2309.01279 data: https://zenodo.org/record/8287792

Zenodo
🙌 Joe Ferdinando, Suzanne Stathatos, Ben Weinstein, Emily Lines, Enis Berk Çoban, Nico Lang, Dan Morris, Emilio Luz-Ricca, Antonio Ferraz
👍 Caterina Barrasso
stefano puliti (stefano.puliti@nibio.no)
2023-09-28 14:58:36
Dan Morris (agentmorris@gmail.com)
2023-10-02 14:35:12

New dataset on LILA, courtesy of USGS:

https://lila.science/datasets/izembek-lagoon-waterfowl/

This dataset contains over 500k points (with species-level labels) on waterfowl in aerial images from Alaska. It's a subset of a larger dataset that's also public (https://alaska.usgs.gov/products/data.php?dataid=484), and usually I don't host datasets on LILA that already exist somewhere else, but in this case (a) the original dataset is massive, very difficult to download from ScienceBase, and contains ~95% empty images, and (b) a bunch of work went into aligning the original metadata with the images and converting to a standard format, and I didn't want everyone else who works with this dataset to have to repeat that work.

And I have a specific request for anyone who wants to tinker with this dataset!

tl;dr: it would be great if someone could train YOLOv8x on this data and compare to the YOLOv5x6 model we've already trained.

Long version...

We've trained a YOLOv5 (specifically YOLOv5x6) detector on this data that USGS and US Fish & Wildlife are happy with, so the project is in a good stable state (https://github.com/agentmorris/usgs-geese). But the decision to use YOLOv5 rather than the newer, better YOLOv8 was mostly based on the fact that YOLOv5x6 has a 1280-pixel input size, and there isn't (yet) a 1280-pixel version of YOLOv8. But there's not really a reason not to break everything into 640px patches (instead of 1280px) and use YOLOv8x. It would require no new code to train YOLOv8 now, because all the training was done through the YOLOv5 CLI (which is almost identical to the YOLOv8 CLI); it would just be some environment setup and job management and GPU time. So, if someone really wants to get your detector on, let me know.

Everyone place your bets on this thread about whether using a newer model that will require a little more inference time due to the smaller patch size (YOLOv8x is like 10% faster per patch than YOLOv5x6, but will require ~4x as many patches) will work better, or whether it won't really make a difference. You can see a bunch of sample patches and sample results at the GitHub link above to formulate your opinion.
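For a rough sense of the patch-count math (illustrative image resolution; overlap between patches is ignored for simplicity):

```python
import math

# Tiling an image into 640px patches yields ~4x as many patches as 1280px
# tiles, so a per-patch speedup of ~10% doesn't offset the extra patches.
def num_patches(w, h, patch):
    return math.ceil(w / patch) * math.ceil(h / patch)

w, h = 8688, 5792                  # made-up aerial-image resolution
n1280 = num_patches(w, h, 1280)    # 7 * 5  = 35
n640 = num_patches(w, h, 640)      # 14 * 10 = 140
print(n640 / n1280)                # 4.0
# Relative total cost if each 640px patch is ~10% faster to process:
print(round((n640 * 0.9) / n1280, 2))  # 3.6
```

So the bet is really about whether YOLOv8's accuracy gains justify roughly 3-4x the inference time.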

👍 Aamir Ahmad, Ben Weinstein, Vincent Christlein, Devis Tuia, Fadel, Sam Lapp
🦆 Rowan Converse, Sara Beery, Emilio Luz-Ricca
‼️ gvanhorn
😎 Jon Van Oast
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-10-03 03:54:55

*Thread Reply:* we could give YOLOv8 a try on this... just need a bit of time (after IROS this week and our next field trip; we are back in the office on 24th Oct)

Dan Morris (agentmorris@gmail.com)
2023-10-03 09:12:15

*Thread Reply:* SG, drop me an email when you're back (agentmorris@gmail.com), and I'll send some more information.

👍 Aamir Ahmad
Dan Morris (agentmorris@gmail.com)
2023-10-02 17:09:10

There was a thread here a few months ago about bounding box annotation tools; @Aakash Gupta maintains a useful list here.

Since then I tried a bunch of tools for the specific scenario where an AI model generates boxes that are mostly sensible but need a bunch of cleanup. I found that most tools - especially Label Studio, which seems to be the big-and-generally-quite-good gorilla in this space - did not make this super-easy unless you were completely bought into the tool's built-in ML pipeline. labelme came the closest: it's fast, it has a simple file format, and the code is easy enough to modify that it's efficient to make disposable versions of the tool for specific tasks.

I ended up forking labelme to add some minor features that end up being a win for my specific scenario, like keyboard adjustment of box boundaries, and keyboard control of box selection. Also the ability to make absurdly large, bright boxes, because when everything is working well, you're just banging the "next" button as fast as you can and it's really helpful to have super-salient boxes.

Everything I did is a total hack and crashes randomly and I don't think this fork will be useful to anyone, but I am curious about the features that other folks doing similar work found were necessary for AI-assisted annotation that other tools were missing. I.e., the previous thread was really about which tools are "good", now I'm curious about what features are missing in existing annotation tools for everyone's conservation dataset tasks.

FWIW I found BoundingBoxEditor to be the best tool for validating and previewing YOLO-formatted boxes, as a final consistency check before training with a framework that expects YOLO-formatted annotations.
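As a rough illustration of the kind of consistency check I mean (a sketch, not a substitute for visually inspecting boxes in a tool like BoundingBoxEditor):

```python
# YOLO-format label files have one "class cx cy w h" line per box, with an
# integer class index and coordinates normalized to [0, 1].
def validate_yolo_line(line, n_classes):
    parts = line.split()
    if len(parts) != 5:
        return False
    cls, coords = parts[0], parts[1:]
    if not (cls.isdigit() and int(cls) < n_classes):
        return False
    try:
        vals = [float(v) for v in coords]
    except ValueError:
        return False
    return all(0.0 <= v <= 1.0 for v in vals)

print(validate_yolo_line("3 0.5 0.5 0.2 0.1", n_classes=10))  # True
print(validate_yolo_line("3 1.5 0.5 0.2 0.1", n_classes=10))  # False (cx > 1)
```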

👍 Aakash Gupta, Akash Jaiswal
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-10-03 03:53:41

*Thread Reply:* Perhaps https://github.com/robot-perception-group/smarter-labelme?search=1 could be of help; we built it on top of labelme. We further adapted it for behavior annotations here: https://github.com/robot-perception-group/animal-behaviour-inference (perhaps the thread you meant was about our smarter-labelme tool when we released it). A short summary of what the tool does: it tracks the bounding boxes with a predict-detect loop (like, but not exactly, a Kalman filter). Both the predictor and the detector are DNNs, and a human in the loop fixes small deviations, if any. We see almost a 10-fold decrease in time spent annotating/correcting.

Gracie Ermi (gracieermiifthen@gmail.com)
2023-10-02 17:13:42

The company I work for, Impact Observatory, has started offering the ability to more easily download our free annual land use and land cover maps clipped to any area of interest for any year from 2017-2022.

Our global maps were already open access, but now you can order them through our store and get your desired data emailed to you already clipped to your area of interest along with various metrics about your data. Thought this might be helpful or interesting to some folks in this group! https://x.com/ImpactObserv/status/1702684441271804004?s=20

👍 Dan Morris, Carly Batist
👀 Elizabeth Campolongo
🎉 Jon Van Oast, Carly Batist
Dan Morris (agentmorris@gmail.com)
2023-10-02 17:18:34

*Thread Reply:* I just placed a test order for a small area around where I live to try out the tool; the process was very slick!

Gracie Ermi (gracieermiifthen@gmail.com)
2023-10-02 17:33:43

*Thread Reply:* Glad to hear it @Dan Morris!

Paul Allin (allinpaul@gmail.com)
2023-10-25 06:08:05

*Thread Reply:* Cool product! Is there some way of saving the polygon or getting multiple years in one order?

Gracie Ermi (gracieermiifthen@gmail.com)
2023-11-03 14:48:51

*Thread Reply:* Thanks, @Paul Allin! Using the same custom polygon for multiple orders is a much-requested feature, and one that we definitely want to offer soon!

Chris Yeh (chrisyeh96@gmail.com)
2023-10-02 17:26:47

🚨 Last call for submissions to the 2023 NeurIPS Workshop on Computational Sustainability! https://www.compsust.net/compsust-2023/

  • Submission Deadline: Oct 3, 2023 (AOE)
  • Notification of Acceptance: Oct 21, 2023
  • Workshop: December 15th, 2023; New Orleans, Louisiana

We hope to see some submissions from the AI for Conservation community!

Feel free to message me or contact neurips-workshop2023@compsust.net with any questions.

} Katelyn Morrison (https://aiforconservation.slack.com/team/U04A34D3673)
🙌 Patrick Beukema
Dan Morris (agentmorris@gmail.com)
2023-10-02 17:34:09

*Thread Reply:* The negative results focus is a great idea. Negative ML results in the literal sense of low accuracy are often a little tricky because it can be impossible to say whether the results were fundamental or just related to implementation issues. But negative results where the ML worked (in the sense of high accuracy), but the model still didn't end up being useful (because the test domain surprised you, or you built a model you thought the user wanted but it wasn't actually what they wanted, or the user thought 91% accuracy was good enough but it turns out they needed 94% accuracy, or whatever)... those are super-interesting! Excited to see what submissions you get in the "negative results" department.

👍 Chris Yeh, Cameron Trotter, Yuanqi Du, Vardaan Pahuja, Sara Beery, Ștefan Istrate, Paul Melki, Katelyn Morrison, Justin Kay, Sepand Dyanatkar
❤️ Suzanne Stathatos, Sara Beery, Katelyn Morrison
Sara Beery (sbeery@caltech.edu)
2023-10-03 09:50:26

Amazing slide from @Caleb Robinson's talk at the HADR workshop 😂😂😅😅😅

👍 Jonathan Roberts, Olivier Dietrich
👏 Esther Rolf, Devis Tuia, Dan Morris, Mark Goldwater, Taiki Sakai - NOAA Affiliate, Emilio Luz-Ricca, Katelyn Morrison
🙃 Chris Yeh
Sako Arts (sako@fruitpunch.ai)
2023-10-03 10:22:10

FruitPunch AI for Turtles Challenge coming up! 🌊🐢 Dive into the world of Sea Turtle Conservation! 🐢🌊

We're thrilled to announce the 𝐀𝐈 𝐟𝐨𝐫 𝐓𝐮𝐫𝐭𝐥𝐞 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞! 🐢🤖

Our goal is to develop cutting-edge computer vision software that can recognize and distinguish individual turtles through automated identification. Current turtle monitoring methods are intrusive, time-consuming, and expensive, posing challenges to organizations dedicated to protecting these endangered creatures.

How it Works: Sea turtles have unique facial scales, much like human fingerprints. AI can identify these distinct characteristics, helping researchers associate them with individual turtles. In this Challenge, we partner up with @ Sea Turtle Conservation Bonaire, an amazing organization committed to protecting these precious animals and their habitat.

In this challenge, we aim to:
  • Explore Siamese and Triplet networks for turtle identification.
  • Investigate state-of-the-art techniques like 𝐀𝐫𝐜𝐅𝐚𝐜𝐞.
  • Develop data pipelines for processing training and testing sets.
  • Create an easy-to-use GUI in collaboration with researchers for efficient turtle identification.
We're seeking passionate individuals from various backgrounds, including AI/ML enthusiasts, software developers, biologists/conservationists interested in AI, and computer scientists. We expect a commitment of ~8 hours per week. You will be joining weekly meetings, masterclasses, and presentations.

Join us in making a real impact on turtle conservation in Bonaire and contribute to preserving these incredible creatures! 🌊🐢

Learn more about the challenge on our website: 👉 https://www.fruitpunch.ai/challenges/ai-for-turtles

🎉 Carly Batist, Dan Morris, Cameron Trotter, Jon Van Oast, Katelyn Morrison, Sara Beery, Ronan Wallace
🐢 Cameron Trotter, Yseult Hb, Elizabeth Campolongo, Gracie Ermi, Mitchell Rogers, Katelyn Morrison, Sako Arts, Sara Beery, Ronan Wallace
Elizabeth Campolongo (e.campolongo479@gmail.com)
2023-10-03 11:25:43

*Thread Reply:* Do the datasets for these challenges get published somewhere afterwards? I couldn't find a link to data on past challenges on the website.

Katelyn Morrison (kcmorris@andrew.cmu.edu)
2023-10-04 00:39:26

*Thread Reply:* Have you chatted with the ML engineers at WildMe yet? @Jason Holmberg (Wild Me) and his team may have some good insights on this challenge 🙃

Sako Arts (sako@fruitpunch.ai)
2023-10-04 09:07:39

*Thread Reply:* Hi Elizabeth, it depends on the Challenge Owner who provides the data whether we can publish it. When we can, we always publish it on our platform, but in many cases we sadly cannot. In this case we have 3 datasets: two are from the Caribbean and will only be made available to challenge participants for now; the third is a public dataset from South Africa.

Sako Arts (sako@fruitpunch.ai)
2023-10-04 09:16:43

*Thread Reply:* @Katelyn Morrison we have most definitely been in touch, mostly regarding our animal classification cases, not yet on an identification case, though I'd be eager to collaborate. I see they already have an Internet of Turtles project

❤️ Katelyn Morrison
Sako Arts (sako@fruitpunch.ai)
2023-10-04 09:17:39

*Thread Reply:* @Jason Holmberg (Wild Me) I read you use a SIFT based method. I'd be interested to hear about your experiences with this algorithm!

❤️ Katelyn Morrison
Urs (urs.waldmann@uni-konstanz.de)
2023-10-05 04:00:14

*Thread Reply:* Hi Sako, what kind of annotations does the public dataset from South Africa contain? Can you post a link to this public dataset, please? Thanks, Urs

Sako Arts (sako@fruitpunch.ai)
2023-10-05 04:23:17

*Thread Reply:* Hi Urs, You can find the data and the explanation of the labels here: https://zindi.africa/competitions/turtle-recall-conservation-challenge/data

Urs (urs.waldmann@uni-konstanz.de)
2023-10-05 04:31:25

*Thread Reply:* Thanks, @Sako Arts

Casey Clifton (caseyclifton@proton.me)
2023-11-26 02:23:44

*Thread Reply:* Hey @Sako Arts, I've been approached to advise a sea turtle citizen science startup on AI methods for individual ID of sea turtles. Wondering if you have any suggestions on open source models/techniques I should be looking at? Cheers 🙂

Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2023-11-26 12:31:31

*Thread Reply:* @Casey Clifton ArcFace loss + EfficientNet is our state of the art for general re-ID. Turtles are on our roadmap (data from iot.wildbook.org), but it might be Q1 2024 before we have time to train.

https://github.com/WildMeOrg/wbia-plugin-miew-id
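For anyone curious what ArcFace loss actually does, here is a minimal numpy sketch of the margin idea (not Wild Me's implementation; `s` and `m` are just typical illustrative values): compute cosine logits against per-class centers, then apply an additive angular margin to the true class before scaling.

```python
import numpy as np

def arcface_logits(embedding, class_centers, label, s=64.0, m=0.5):
    """ArcFace-style logits: cosine similarity between an L2-normalized
    embedding and per-class centers, with an additive angular margin m
    applied to the ground-truth class, then scaled by s."""
    e = embedding / np.linalg.norm(embedding)
    c = class_centers / np.linalg.norm(class_centers, axis=1, keepdims=True)
    cos = c @ e                                     # cosine similarity per class
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    logits = s * cos
    logits[label] = s * np.cos(theta[label] + m)    # margin makes the true class "harder"
    return logits

# The margin lowers the true-class logit, which during training forces
# same-individual embeddings to cluster more tightly.
logits = arcface_logits(np.array([1.0, 0.0]),
                        np.array([[1.0, 0.0], [0.0, 1.0]]), label=0)
```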

🙌 Casey Clifton
Casey Clifton (caseyclifton@proton.me)
2023-11-26 18:35:00

*Thread Reply:* Thanks @Jason Holmberg (Wild Me)!

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-10-04 18:52:29

Anyone else going to the GEOBON Monitoring Biodiversity for Action conference in Montreal next week?! 😃

❤️ Jon Van Oast, Timm Haucke, Sara Beery
👋 Timm Haucke, Benjamin Kellenberger, Ruben Remelgado, Justine Boulent
Sara Beery (sbeery@caltech.edu)
2023-10-05 07:46:01

*Thread Reply:* @Timm Haucke will be representing our group there!

🎉 Carly Batist
Mélisande Teng (tengmeli@mila.quebec)
2023-10-06 11:46:07

*Thread Reply:* I will also be attending!

👋 Timm Haucke, Carly Batist
🎉 Carly Batist, Justine Boulent
Peter van Lunteren (contact@pvanlunteren.com)
2023-10-11 11:34:55

👋 Hi, I’d like to ask for some advice about training a single-class species detector. I’m training a YOLOv8 classification model to be used in conjunction with MegaDetector 5. At the moment, I’m only interested in one class: lion. The problem with one class is that the built-in validation of YOLOv8 doesn’t work, since all images are always classified as the only possibility, namely lion. Does anyone have experience with this? It seems to me that the YOLOv8 architecture doesn’t really support one-class training. Do you recommend adding a second class non-lion with representative other animals from the ecosystem (e.g. giraffe, zebra, elephant, etc.)? Or is it better to create custom test and validation functions that incorporate some kind of confidence threshold? Thanks!

Titus (titus@colossal.com)
2023-10-11 11:36:09

*Thread Reply:* Yes, you have to have an alternative option like non-lion. Otherwise you're attempting to train lion vs. NULL. That's the easiest route at least.

🙏 Peter van Lunteren
Sara Beery (sbeery@caltech.edu)
2023-10-11 11:39:38

*Thread Reply:* Is the idea to classify crops from megadetector or to train a separate lion-only detector? If it's the second, and you have bbox labels, then it should use the background not in the lion boxes as the negative class.

👍 Titus
🙏 Peter van Lunteren
Peter van Lunteren (contact@pvanlunteren.com)
2023-10-11 13:40:01

*Thread Reply:* Great 🙂 Yes, it will classify crops from megadetector. I'll go and find some non-lion crops to add a second class!

Different scenario: if I wanted to train a classifier for lion and zebra, do you recommend adding a third class other? Or could I in this case use the confidence values to decide which animals are neither lion nor zebra? For example, if the crops from megadetector don't get a classification for either lion or zebra above some threshold value X, then they'd be classified as other.
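A minimal sketch of that thresholding idea (class names and the 0.6 threshold are purely illustrative, not a recommendation for a specific X):

```python
def assign_label(probs, threshold=0.6, classes=("lion", "zebra")):
    """Map classifier softmax outputs to a label, falling back to 'other'
    when no known class clears the confidence threshold."""
    best = max(range(len(classes)), key=lambda i: probs[i])
    return classes[best] if probs[best] >= threshold else "other"

assign_label([0.92, 0.08])   # confident lion
assign_label([0.45, 0.55])   # nothing clears the threshold: "other"
```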

Dan Morris (agentmorris@gmail.com)
2023-10-12 11:23:04

*Thread Reply:* This is one of those questions where I think you get a free PhD in computer science if you can provide a universal answer. :) I'm assuming you have labels for a bunch of other species too, e.g. you're working with Snapshot Safari data, or more generally, it's likely that whatever your training data is, it's not only lion and zebra that were labeled. Even if you're only interested in lions and zebras, I'm 71% confident that you want to include labels for at least the other common classes as well, but you probably don't want to use so many training examples that they overwhelm lions or zebras. E.g. if you're in an ecosystem where wildebeest and elephants are also common, you could (a) lump them into an "other" class or (b) give them their own classes, and maybe lump the less common animals into "other". I'm 71% confident your performance on lions and zebras will go up if you choose (b), but I'm not sure. And it's hard to say how far down the list of classes you want to go before you start lumping things into "other". If you have 10 species that are as common as lions, I think you want them all to be their own classes, but I'm not sure. Let us know what you learn!

👀 Steve Haddock
🙏 Peter van Lunteren
😀 Alan Stenhouse
Chuck Stewart (cvstewart@gmail.com)
2023-10-12 11:29:49

*Thread Reply:* ❤️ I really like this answer.

➕ Sara Beery
Peter van Lunteren (contact@pvanlunteren.com)
2023-10-13 07:54:04

*Thread Reply:* I was hoping for one of those answers with which you'll get a free PhD... 😁 Thanks guys! That makes a lot of sense 🚀

Patrick Beukema (patrickb@allenai.org)
2023-10-11 17:00:22

Hey all, We came across a strange artifact when we were evaluating one of our newer models in a sentinel-2 image. It looks like confetti separation of the RGB channels (shown in the TCI imagery below). We don’t know what this is — and a quick G search did not yield anything relevant. Any ideas? Has anyone seen this before?

Caleb Robinson (calebrob6@gmail.com)
2023-10-11 17:21:48

*Thread Reply:* And a link to look around on the PC: https://planetarycomputer.microsoft.com/explore?c=110.6857%2C7.9889&z=9.99&v=2&d=sentinel-2-l2a&m=Most+recent+%28any+cloud+cover%29&r=Natural+color&s=false%3A%3A100%3A%3Atrue&sr=desc&ae=0

Patrick Beukema (patrickb@allenai.org)
2023-10-11 17:22:42

*Thread Reply:* there is separation there as well

Patrick Beukema (patrickb@allenai.org)
2023-10-11 17:22:47

*Thread Reply:* https://planetarycomputer.microsoft.com/explore?c=111.0984%2C7.9519&z=13.54&v=2&d=sentinel-2-l2a&m=Most+recent+%28any+cloud+cover%29&r=Natural+color&s=false%3A%3A100%3A%3Atrue&sr=desc&ae=0

Patrick Beukema (patrickb@allenai.org)
2023-10-11 17:22:51

*Thread Reply:* nice viewer by the way

❤️ Caleb Robinson
Caleb Robinson (calebrob6@gmail.com)
2023-10-11 17:29:55

*Thread Reply:* 1000% agree, but I can't take any credit for it :)

Patrick Beukema (patrickb@allenai.org)
2023-10-11 17:31:09

*Thread Reply:* yeah it's super nice — responsive, fast, does exactly what you think it should do with zero need to read documentation

Nathan Jacobs (jacobsn@wustl.edu)
2023-10-11 17:55:52

*Thread Reply:* That looks like the combination of specular reflections and different acquisition times / angles. Some additional info: https://gis.stackexchange.com/questions/332634/water-reflectance-effect-on-sentinel-2-images

🙌 Patrick Beukema, Dan Morris
Patrick Beukema (patrickb@allenai.org)
2023-10-11 18:25:25

*Thread Reply:* wow. That makes total sense. Thank you for responding.

Patrick Beukema (patrickb@allenai.org)
2023-10-11 18:48:27

*Thread Reply:* @Nathan Jacobs your explanation makes sense but also I would expect to see this effect more often. To date I have only seen it once, and we look at a significant amount of S2 TCI imagery.

Nathan Jacobs (jacobsn@wustl.edu)
2023-10-11 23:01:40

*Thread Reply:* maybe it's a scale issue... the reflections need to be large enough to not get averaged out w/in a single pixel... honestly not sure

Patrick Beukema (patrickb@allenai.org)
2023-10-11 23:34:38

*Thread Reply:* thank you!

Devis Tuia (devis.tuia@epfl.ch)
2023-10-12 02:34:54

*Thread Reply:* I remember back in the day Quickbird had similar issues because the IR band was not acquired simultaneously with the RGB. So you could see moving objects by making a false color composite

✔️ Patrick Beukema
Patrick Beukema (patrickb@allenai.org)
2023-10-12 11:44:55

*Thread Reply:* Appreciate all the responses. For the moment we are just going to tell our users that this is a known anomaly with the cause likely attributed to reflectance as Nathan mentioned. If we see this more often, and if it has an impact on our precision, then we will revisit and remove this effect — perhaps via a conventional noise filtering technique (erosion, dilation).
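For reference, a pure-numpy sketch of the opening (erosion-then-dilation) idea mentioned above; in practice one would more likely reach for OpenCV or scipy.ndimage, and the kernel size is illustrative:

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

# Opening (erosion then dilation) removes isolated "confetti" pixels
# while leaving larger connected regions mostly intact.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[1, 1] = 1        # single stray pixel: removed by opening
mask[3:7, 3:7] = 1    # solid 4x4 block: survives
opened = dilate(erode(mask))
```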

Kostas Papafitsoros (k.papafitsoros@qmul.ac.uk)
2023-10-12 05:56:24

Exciting PhD opportunity! I am advertising a fully funded PhD studentship at the Queen Mary University of London under the project "Data-driven Image Processing Methods with Applications to Wildlife Conservation" (deadline: December 1st). If you have a strong mathematical/computer science background, good coding skills, are interested in AI and you would like to put those for the good of wildlife conservation with a special focus on sea turtles, then this project is for you! 👉 More information here: bit.ly/46s8CFB (or drop me a direct message here)

❤️ Jaroslav Bezdek, Lukas Picek, Oisin Mac Aodha, Emily Lines, Yseult Hb, Suzanne Stathatos, Valentin Gabeff, Sara Beery, Katelyn Morrison, Akshay Paruchuri, Alexander Merdian-Tarko, Alex Brace
🎉 Lukas Picek, Dan Morris, Sara Beery, Katelyn Morrison, Jaka Cikač
🐢 Cameron Trotter, Robin Zbinden, Emilio Luz-Ricca, Sara Beery, Katelyn Morrison, Malte Pedersen, Jaka Cikač
Benjamin Hoffman (benjaminsshoffman@gmail.com)
2023-10-16 14:59:35

Hi everyone, I’m writing to share some tool-building work @Maddie Cusimano and I did which might be useful for other folks working with long audio recordings: https://github.com/earthspecies/voxaboxen.

This is a framework to train a sound event detection model directly on audio plus Raven selection tables. It differs from most other sound event detection approaches that we know, in that it uses an object detection framework (similar to YOLO) to box events, rather than a segmentation or classification framework. This approach is useful for labeling data at a fine temporal scale, which we wanted for looking at interactive vocal behavior of animals.

We talk more about our approach in this blog post: https://www.earthspecies.org/blog/voxaboxen-new-tool-to-support-annotation-of-large-audio-files. Happy to answer any questions, hear comments, and receive feedback! We can also help you to run this on your data.
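On the input side: Raven selection tables are tab-separated text, so turning them into (start, end, label) training boxes takes only a few lines. A minimal sketch assuming the standard "Begin Time (s)" / "End Time (s)" columns (not Voxaboxen's actual loader):

```python
import csv
import io

def read_raven_selections(text):
    """Parse a Raven selection table (tab-separated) into (begin, end, annotation)
    tuples -- the minimal fields a detection model needs as training boxes."""
    rows = csv.DictReader(io.StringIO(text), delimiter="\t")
    return [(float(r["Begin Time (s)"]), float(r["End Time (s)"]), r.get("Annotation", ""))
            for r in rows]

table = ("Selection\tBegin Time (s)\tEnd Time (s)\tAnnotation\n"
         "1\t0.50\t1.25\tcall_a\n"
         "2\t3.10\t3.80\tcall_b\n")
boxes = read_raven_selections(table)
```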

🎉 Jon Van Oast, Maddie Cusimano, Elizabeth Campolongo, Dan Morris, gvanhorn, Enis Berk Çoban, John Martinsson, Yseult Hb, Dylan Van Bramer (she/her)
❤️ Nicolas Arrieta Larraza, Enis Berk Çoban, Aniruddha Saha, L, Marius Miron
👍 Steve Murphy, Eelke, Vincent Christlein, Cameron Trotter, Valentin Gabeff, Alan Stenhouse
Dan Morris (agentmorris@gmail.com)
2023-10-16 21:28:33

*Thread Reply:* How many other people immediately went to YouTube after reading this post and searched for "what sound does a meerkat make?". I did. It was worth it. Meerkats just got more adorable, which is saying something.

💚 Maddie Cusimano, Mitchell Rogers, Shir Bar, Benjamin Hoffman
😀 Alan Stenhouse
Nicolas Arrieta Larraza (n.arrieta.larraza@gmail.com)
2023-10-17 02:52:23

*Thread Reply:* Great work! I really enjoyed reading through the article 👍😊

😄 Benjamin Hoffman
😊 Maddie Cusimano
Louisa van Zeeland (cepstrum@gmail.com)
2023-10-17 15:41:35

Hi all, I want to pass along Tom Mustill's call for contribution to his upcoming work MESH x How to Speak Whale. They aim "to bring the voices of life in the ocean to new ears, to help artists discover new creative terrains and forge links through science and art to ocean conservation". All contributors will be fully credited. I must add that Tom's Whale Song Bath at the British Library this past summer was absolutely brilliant and I'm sure this event will be as well. Please see attached flyer for more information and his contact info 🐋

😎 Jon Van Oast, Justin Kay, Jason Holmberg (Wild Me)
🐋 Dan Morris, Alex Brace, Cameron Trotter
Patrick Beukema (patrickb@allenai.org)
2023-10-18 12:00:04

Hi all, I am writing to share several remote sensing computer vision models that we recently created and open sourced:
  • https://github.com/allenai/vessel-detection-sentinels
  • https://github.com/allenai/vessel-detection-viirs
These models support real-time streaming computer vision (vessel detection) from NASA's and ESA's publicly available satellite imagery (VIIRS, Sentinel-1, and Sentinel-2) in our platform (Skylight: https://www.skylight.global/). Many of these models were built by or in close collaboration with Prior (AI2's CV division), especially Favyen Bastani, who is incredibly talented at geospatial AI. Our team has benefitted massively from the expertise and insights of users in this community, and we are grateful for feedback about these models (engineering, modeling, documentation, or otherwise).

😍 Sara Beery, Gracie Ermi, Elie Alhajjar, Taiki Sakai - NOAA Affiliate, Henry Herzog, Jason Holmberg (Wild Me), Carl Boettiger
😎 Jon Van Oast, Shir Bar, Jason Holmberg (Wild Me)
👏 Jaka Cikač
🙌 Alan Stenhouse
Taiki Sakai - NOAA Affiliate (taiki.sakai@noaa.gov)
2023-10-18 13:25:15

*Thread Reply:* This looks awesome! Our group has some projects trying to understand how vessel traffic noise affects marine mammal presence, but we are only using AIS data. We're always concerned about how many vessels we might miss because they don't have AIS

🙌 Patrick Beukema
Steve Haddock (haddock@mbari.org)
2023-10-18 14:42:07

*Thread Reply:* Echoing the congrats. 🌏 We have used VIIRS to detect bioluminescent milky seas, and are working toward ML methods to automatically flag them in the data streams. Your project looks like a helpful launch point. https://www.nature.com/articles/s41598-021-94823-z

🙌 Patrick Beukema
Patrick Beukema (patrickb@allenai.org)
2023-10-18 14:59:02

*Thread Reply:* Very cool project! (And it brings me back to swimming in bioluminescent bays in Puerto Rico with my wife.)

FWIW, there are scripts for downloading NASA data (for creating datasets) in there as well that you may find useful. We use this service in a real-time platform, and we actively maintain/monitor this code — if you notice any issues feel free to open an issue, and if you want any more details we would be happy to talk. We think VIIRS can be a very powerful technology, and there is even a third satellite that was recently launched (NOAA-21) that will enable us to have 3 passes globally every night. (I gave a talk on this model recently at the NOAA-AI conference: https://noaaai2023.sched.com/event/1SA4O/live-demo-shedding-light-on-shadowed-waters-geospatial-computer-visions-role-in-tackling-illegal-fishing)

https://github.com/allenai/vessel-detection-viirs/blob/d345c61725f117049577ee359b160d9cb1b9f61a/src/gen_obj_detection_dataset.py#L121-L133

👀 Steve Haddock
Dan Morris (agentmorris@gmail.com)
2023-10-18 22:28:48

There were a couple threads recently about training classifiers on MegaDetector crops... I mentioned a few months ago that I posted MD results for all the camera trap datasets on LILA; I'm posting a minor update to those. Partially because it's possibly useful and partially because the mountain was there, I ran the repeat detection elimination process on all of those results, which is basically a semi-automated process that removes a lot (but not all) (maybe not even most) of the false detections on rocks and sticks that happen a zillion times in a row. If you are training classifiers on LILA data using MD crops, the RDE results are the ones you want to use:

https://lila.science/megadetector-results-for-camera-trap-datasets/

Unrelated to the RDE, I also cleaned up some ambiguity about exactly what the base paths were in each of the results files.

Thanks again to @Doantam Phan who recently added a visualization of lots of small tiles to the RDE code, without which it wouldn't be possible to do this at this scale.

Let me know if these are useful, and/or if anything looks goofy in the files.

👍 Valentin Gabeff, Carly Batist, Sara Beery, Alan Papalia, Jon Van Oast, Timm Haucke, Emilio Luz-Ricca, Jason Holmberg (Wild Me), Sepand Dyanatkar, Alan Stenhouse
🙌 Elizabeth Campolongo, Mitch Fennell, Jason Holmberg (Wild Me)
🙏 Thor Veen, Peter van Lunteren, Jason Holmberg (Wild Me)
Sarah (thom1253@umn.edu)
2023-10-26 14:28:59

Hi all -- I'm a lurker here, but maybe it's time to say something 🙂. I run a huge cam trap effort for the state of Idaho (tens of millions of images). I'd really love to talk to other people in similar situations & hear about any tools or tricks you have found that help you manage your flow of pics. We use MegaDetector and Timelapse. In addition, I've built a few R tools and scripts over the years for various things (renaming pics, QA/QC, etc.). I'm interested in pooling efforts/ideas/wishlists, or just providing emotional support for each other 🙂. Specifically, I am interested in a regionally-specific classifier (ID, WA, MT, CO, OR, WY?, UT?). I'm also clueless about hardware that might help us (right now, field staff copy SD cards to 2TB ext hard drives, those get mailed to me -- then I run MD etc.). I'd like to disperse some of these activities to not-me, but I need to build easy-to-use tools for that to happen. I have lots of ideas, little time, and mediocre computing skills... anyone interested in talking? Reach out to me!

👋 Sara Beery, Elie Alhajjar, Omiros Pantazis, Boyu Zhang, Michael Bunsen, Talia Speaker, Enis Berk Çoban, Aakash Gupta, Toryn Schafer, David
🦌 Dan Morris, David
😀 Michael Bunsen
🙌 Carly Batist
👍 Luke Sheneman, Steve Murphy
👏 Alan Stenhouse
Michael Procko (xprockox@gmail.com)
2023-10-26 14:56:17

*Thread Reply:* Hey Sarah, myself and some prior colleagues spent a lot of time developing a workflow to rename, blur, and bin photos using MegaDetector, and I have recently been delving into training regionally-specific classifiers for Washington, British Columbia, Alberta (though only broad classification into carnivore vs. non-carnivore at the moment). Not sure if I can be of much help, but I’d be happy to chat if you have any specific questions!

😎 Jason Holmberg (Wild Me), Sara Beery
❤️ Tiziana Gelmi Candusso
Cara Appel (appelc@oregonstate.edu)
2023-10-26 15:06:47

*Thread Reply:* Hi Sarah (also hi Michael!), I am working on similar projects in Oregon and would be happy to chat as well

👋 Michael Procko, Michael Bunsen
😎 Jason Holmberg (Wild Me), Sara Beery
Sarah (thom1253@umn.edu)
2023-10-26 15:13:27

*Thread Reply:* Yes to both of you! I'll reach out to you each to schedule a shortish call. thanks for responding

👍:skin_tone_2: Cara Appel
👍 Michael Procko, Jason Holmberg (Wild Me), Michael Bunsen
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2023-10-26 17:40:35

*Thread Reply:* WildMe.org here in Oregon. Please include me in your discussions. We take imagery to the individual ID level where possible and would love to work with regional states on AI for aerial surveys too

👍 Sarah
Michael Bunsen (notbot@gmail.com)
2023-10-27 00:42:50

*Thread Reply:* Hi all! I am in Oregon as well and would love to join the conversation. I am part of an international group focused on automating insect monitoring. Some regional support would be very welcome 🙂

👍 Sarah
👍:skin_tone_2: Cara Appel
Michael Bunsen (notbot@gmail.com)
2023-10-27 00:46:23

*Thread Reply:* @Cara Appel I will be at OSU this weekend for the PNW Lepidopterists Workshop. They are going to show off a new collection in the newly remodeled Cordley Hall! https://osac.oregonstate.edu/events/44th-annual-pacific-northwest-lepidopterists-workshop

😍 Sara Beery
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2023-10-27 11:19:51

*Thread Reply:* Hi Sarah, Our workflow here in Toronto uses megadetector and timelapse too!

😎 Sarah
Cara Appel (appelc@oregonstate.edu)
2023-10-27 14:09:37

*Thread Reply:* @Michael Bunsen Sounds so interesting! I will be out of town. Do you know Michael Getz at OSU? Looks like he's not on this channel, but yet another Michael in the AI world and he does pollinator work.

Michael Bunsen (notbot@gmail.com)
2023-10-27 14:37:12

*Thread Reply:* @Cara Appel I don't know that Michael but I am intrigued! It appears that he is a leader in the field of truffle dog training as well 😃

Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-10-30 01:18:19

*Thread Reply:* Hi Sarah - I have worked on creating a workflow and species level classification for Wildling Europe and another project in Telangana, India. In India we are working with local state governments, to setup their machine annotation pipelines for the millions of data points that they gather through regular monitoring. Happy to have a chat and discuss any potential collaborations.

😎 Jason Holmberg (Wild Me), Michael Bunsen
👍 Sarah, Prabath Gunawardane
Prabath Gunawardane (prabathg@gmail.com)
2023-10-31 00:30:31

*Thread Reply:* Hi Sarah, We do something similar in SF Bay Area with Felidae Conservation Fund Wildepod Project (https://felidaefund.org/projects/community/wilde-pod) We have about 1.6 million images (and counting) collected from a couple of hundred camera traps scattered around bay area. A small team of volunteer engineers (including me and @Abhay) built the wildepod.org site which allows the field volunteers to upload the image sets, which are then processed by MegaDetector and then annotated for species etc by a group of other volunteers through a custom interface. It's a Django site hosted on App Engine and I'm happy to share details with you based on your interest.

We've been meaning to implement automated species classification as well, but so far this year we've been kept busy with front-end features, site breakages, etc. Hopefully early next year!

Would love to collaborate with folks doing similar work! 🙂

😎 Sarah, Michael Bunsen, Abhay
Sarah (thom1253@umn.edu)
2023-10-31 11:15:17

*Thread Reply:* Thats really neat Prabath - It sounds like what so many of us are wishing for -- I will take a look!

👍:skin_tone_5: Prabath Gunawardane
Serge Wich (sergewich@gmail.com)
2023-11-10 17:03:38

*Thread Reply:* I am not sure whether it can help for a complete workflow but we have a North American species detection model that we are trying to expand to more species and into result dashboards that are useful for users. https://www.conservationai.co.uk/ I am happy to discuss if this can be part of a workflow that is of use for you and others. We are also always looking for more images for species in North America to train the model to include more species of mammal.

👍 Sarah
Carly Batist (cbatist@gradcenter.cuny.edu)
2023-10-30 13:28:31

My final “who’s going” conference round-up for this year (😂) - I will be at the Wildlife Society conference next week in Louisville, KY and would love to meet up with other conservation tech and AI folks! My stuff:

  1. Workshop on acoustic monitoring (with some sprinkles of AI) on Sunday (Nov 5), 8am-12pm
  2. Podium pres. on using ecoacoustics to track invasive species in Puerto Rico
❤️ Suzanne Stathatos, Toryn Schafer, Enis Berk Çoban, Jason Holmberg (Wild Me), Sara Beery, Prabath Gunawardane, Santiago Ruiz Guzman
Toryn Schafer (tschafer@tamu.edu)
2023-10-30 14:51:57

*Thread Reply:* I am attending the conference Monday. I am in the symposium "Agent-based modeling for wildlife management". There is a get-together for the symposium planned afterwards at Down One Bourbon Bar Monday, Nov. 6th, at 6pm

🙌 Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2023-10-30 14:59:09

*Thread Reply:* Awesome, thanks @Toryn Schafer!

Riley Knoedler (mknoedler@west-inc.com)
2023-11-08 12:14:53

*Thread Reply:* I'm here as well, come by the WEST booth to chat!

Olof Mogren (olof.mogren@ri.se)
2023-11-02 10:19:59

Priya Donti in Learning Machines! https://rise.zoom.us/j/208117140?pwd=SENUZTZ4SDdtc0tvcFdkNzlUQ2tNUT09

🙌 Justin Kay, Shir Bar, Carl Boettiger, Sara Beery
Olof Mogren (olof.mogren@ri.se)
2023-11-02 11:17:53

*Thread Reply:* Amazing talk. Intrigued by the possibilities that optimization in the loop machine learning can give!

Ben Weinstein (benweinstein2010@gmail.com)
2023-11-02 10:29:39

Does anyone know of datasets for airborne detection of humans in natural landscapes? I know of https://www.crcv.ucf.edu/data/UCF-ARG.php and https://arxiv.org/pdf/2209.00128.pdf. I am writing a proposal to expand the DeepForest backbone models (marine mammals, terrestrial ungulates, etc.). Should we do a human detection model? I can imagine it being very helpful in human-wildlife conflict, anti-poaching work, etc. I know many members of our community have data in this area, and we could package it together as we move towards a general ecological object (including humans) detector for open imagery. What does the community think of making such a model available? Are we treading on ethically questionable ground? The potential for nefarious misuse of an airborne human detection model is pretty high. Discuss.

❤️ Jon Van Oast, Kakani Katija
Sara Beery (sbeery@caltech.edu)
2023-11-02 11:03:48

*Thread Reply:* @Elizabeth Bondi-Kelly has a dataset for aerial detection in thermal data

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-11-02 11:31:43

*Thread Reply:* Hi Ben, we have some experience in this context through our AirCap project. I will be glad to connect. You can check the project, pubs, and dataset here: https://www.aamirahmad.de/projects/aircap/

😎 Jon Van Oast
👍 Ben Weinstein
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-11-02 11:34:41

*Thread Reply:* ... and all open source code here https://github.com/robot-perception-group/AirCap

Elizabeth Bondi-Kelly (ecbk@umich.edu)
2023-11-02 14:52:44

*Thread Reply:* Thanks for tagging me @Sara Beery! Yes, here is the dataset: https://sites.google.com/view/elizabethbondi/dataset?authuser=0. I also think that the ethical discussion is incredibly important, and I'd like to refer to some great work that I've seen in this space - this is of course not exhaustive, and much has come from Wildlabs!

https://conbio.onlinelibrary.wiley.com/doi/10.1111/csp2.374
https://besjournals.onlinelibrary.wiley.com/doi/10.1002/2688-8319.12033
https://www.youtube.com/watch?v=Ge6_EF0z83U
https://fnigc.ca/ocap-training/
https://news.usask.ca/articles/colleges/2020/engaging-northern-expertise-strengthens-ecological-science.php
https://link.springer.com/article/10.1007/s13280-015-0714-0
https://vimeo.com/771752163

❤️ Sara Beery, Alan Stenhouse
Patrick Beukema (patrickb@allenai.org)
2023-11-09 13:34:30

Does anyone here use ESA’s Sentinel-1/2 datasets for a (near) real time streaming application? And if so, what has your experience been like with the new (or old) API? We are running into occasional data availability issues, and are not sure what we might do to improve (e.g. leverage a different data provider such as Sentinel-Hub).

Casey Youngflesh (caseyyoungflesh@gmail.com)
2023-11-13 12:50:08

*Thread Reply:* I’m currently working on something that relies on the Sentinel API, the goal being to DL/process imagery daily. I’ve found the API extremely clunky but have a pipeline that I believe works. I’m very much trying to avoid going with a downloader-pays model, which seems to be getting more common… Interested in getting your perspective here and happy to chat more.

Patrick Beukema (patrickb@allenai.org)
2023-11-13 12:54:51

*Thread Reply:* We are having quite a bit of friction with this, and we are happy to pay, but it's not obvious whether that would solve our problems. It's not trivial to support these NRT use cases, it requires significant resources to do so, and then there is the question of at what SLA. There are groups moving to the cloud/S3 (like NOAA for some of their products), which enables a more modern AWS API.

Our use case is rather demanding — we support organizations worldwide, and they need to know as soon as possible whether there are vessels somewhere doing something they aren't supposed to be doing. Our computer vision is actually quite fast in comparison to the downlink/transfer from ESA's servers.

Patrick Beukema (patrickb@allenai.org)
2023-11-13 12:55:11

*Thread Reply:* | It's not trivial to support these NRT use cases — not us, I mean for NASA/ESA etc.

Patrick Beukema (patrickb@allenai.org)
2023-11-13 12:58:05

*Thread Reply:* FWIW we have had a great experience with NRT NASA data (we use Suomi-NPP/NOAA-20, and soon NOAA-21 satellites) — their servers are basically always on, and we only see performance degradation when there is an actual onboard satellite issue causing data corruption/preventing downlink to earth.

Patrick Beukema (patrickb@allenai.org)
2023-11-13 12:58:44

*Thread Reply:* NASA’s NRT data is free — which is kind of surreal when you think about how big of a lift it is to support that volume of data, and NRT on top of that

Patrick Beukema (patrickb@allenai.org)
2023-11-13 13:00:56

*Thread Reply:* Our goal is to open source the best solutions we find so that groups don’t have to reinvent the wheel every time. Here is the service for NRT processing of Suomi-NPP and NOAA-20 — I linked to it earlier in this channel, but didn’t underscore the downloading component, in case you find the code useful: https://github.com/allenai/vessel-detection-viirs and happy to chat of course as well

Casey Youngflesh (caseyyoungflesh@gmail.com)
2023-11-16 12:27:39

*Thread Reply:* Thanks for sharing the repo and for your take on this! Nice to know about the reliability of the Suomi/NOAA NRT data. From my memory, Sentinel data is available within about 4 hours of acquisition (when DLing directly from the ESA Sentinel API), which I could see would not be ideal for your use case. I’m more concerned with satellite revisit time than time to data availability (once it’s in the hours range). Happy to share the code I have for Sentinel-2 once it’s cleaned up/fully operational.
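[Editor's note] The "what's new since the last poll" logic at the heart of an NRT pipeline like the ones discussed in this thread can be sketched independently of any particular provider. Everything below (record fields, product IDs) is hypothetical; the real ESA/NASA catalogue APIs return richer, paginated metadata:

```python
from datetime import datetime, timezone

def new_products(catalog, last_seen):
    """Return products ingested after `last_seen`, oldest first, plus the
    new high-water mark to persist before the next polling cycle."""
    fresh = sorted(
        (p for p in catalog if p["ingested"] > last_seen),
        key=lambda p: p["ingested"],
    )
    mark = fresh[-1]["ingested"] if fresh else last_seen
    return fresh, mark

# Hypothetical catalogue response; in practice this comes from a query
# against the data provider's search API.
catalog = [
    {"id": "S2A_0001", "ingested": datetime(2023, 11, 13, 10, 0, tzinfo=timezone.utc)},
    {"id": "S2B_0002", "ingested": datetime(2023, 11, 13, 12, 30, tzinfo=timezone.utc)},
]
fresh, mark = new_products(catalog, datetime(2023, 11, 13, 11, 0, tzinfo=timezone.utc))
```

Persisting the high-water mark (rather than re-listing everything) is what keeps repeated polls cheap when data availability is flaky.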

Dan Stowell (dan.stowell@naturalis.nl)
2023-11-10 12:19:54

I've put some course material online for my "AI for Nature & Environment" including 2 video-lectures: https://github.com/danstowell/ai_nature_environment

😍 Sara Beery, Justin Kay, Timm Haucke, Taiki Sakai - NOAA Affiliate, Oisin Mac Aodha, Prabath Gunawardane, Dylan Van Bramer (she/her), Katelyn Morrison, Rita Pucci, Jennifer, Carly Batist
🙏 Patrick Beukema, Alexander Merdian-Tarko, Enis Berk Çoban, Kalindi Fonda, Maddie Cusimano, charlotte
🙌 Alan Stenhouse, Talia Speaker, Carly Batist, Julien Boussard
❤️ Thomas Radinger
Violet Turri (vturri@andrew.cmu.edu)
2023-11-10 13:11:54

Hi everyone! @Katie Robinson and I are AI researchers at the Carnegie Mellon University Software Engineering Institute. We are interested in exploring applications of large language models (LLMs) within the conservation space. If anyone is working on a project in this space, please shoot us a message and we can schedule a time to chat more about your work. Thanks!

👍 Katie Robinson, Ankita Shukla, Negar Sadrzadeh, Patrick Beukema, Collin Abidi, Jason Holmberg (Wild Me), Henry Herzog, Douglas Mbura, Katelyn Morrison
👍:skin_tone_3: Alan Stenhouse
Patrick Beukema (patrickb@allenai.org)
2023-11-10 13:40:13

*Thread Reply:* Hi we (Skylight) are exploring geospatial tuned LLMs (bespoke to Skylight) to remove barriers for entry/accelerate adoption/help growth etc — we built (and deployed) one for AI2's hackathon with AI2's Aristo team. I went to CMU and this caught my eye 🙂. Feel free to reach out to my email directly: patrickb@allenai.org Skylight is a marine conservation platform.

👍 Violet Turri
Patrick Beukema (patrickb@allenai.org)
2023-11-10 13:42:18

*Thread Reply:* https://www.skylight.global/

👍:skin_tone_3: Alan Stenhouse
Violet Turri (vturri@andrew.cmu.edu)
2023-11-13 11:03:06

*Thread Reply:* Hi Patrick! Thanks for the response, this sounds like a really interesting project. We’ll follow up over email to chat more.

Eric Orenstein (eorenstein@mbari.org)
2023-11-13 12:16:28

Job opportunity at the Turing Institute on Autonomous Systems for Biodiversity Monitoring. Help develop algorithms for adaptive sampling on autonomous underwater vehicles!

cezanneondemand.intervieweb.it
🙌 Justin Kay, Carly Batist, Oisin Mac Aodha, Violet Turri, Katie Robinson, Shir Bar, Sara Beery, gvanhorn, Katriona Goldmann, Suzanne Stathatos, Ted Schmitt
😀 Michael Bunsen
Blair Costelloe (blaircostelloe@gmail.com)
2023-11-20 02:56:49

Would anyone be interested in trying BlueSky? I have a couple invite codes available

Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2023-11-21 13:47:19

*Thread Reply:* Sure. jason@wildme.org please.

Graham Wallington (graham.wallington@natureeye.com)
2023-11-23 10:12:46

*Thread Reply:* I too would love to have a look.

Olof Mogren (olof.mogren@ri.se)
2023-12-11 04:12:26

*Thread Reply:* I created a Bluesky feed on AI for climate change! https://bsky.app/profile/did:plc:fyw3wpo7owjjc362rp42upzo/feed/aaabbqsgt6bd2

Olof Mogren (olof.mogren@ri.se)
2023-12-11 04:18:06

*Thread Reply:* @Jason Holmberg (Wild Me), @Graham Wallington, did you register? I cannot find you at Bluesky.

Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-11-21 17:51:23

Is there a survey paper about different camera trap datasets out there? I am looking for a resource for all available camera trap datasets. I am wondering if there are any datasets that aren't covered by LILA.

Dan Morris (agentmorris@gmail.com)
2023-11-21 19:50:51

*Thread Reply:* I don't know if this counts as "not covered by LILA", but on the "other datasets" page on LILA, I list a few other camera trap datasets in the section called "Terrestrial wild animal images (ground-based sensors)":

https://lila.science/otherdatasets#images-terrestrial-animals-ground

Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-11-21 21:51:50

*Thread Reply:* Thanks a lot!

Robin Ranabhat (robinnarsingha123@gmail.com)
2023-11-26 02:12:24

Apologies in advance if this is not the right place for this inquiry. I recently discovered the field of computational sustainability through the work of professors Bistra Dilkina and Carla Gomes, and have decided this is the kind of work I want to do. My current skill-set is limited to software and applied machine learning, but this endeavor requires a good foundation in various sub-domains of CS (mathematical thinking, operations research, etc.) and beyond. It would be really helpful to know if there are related graduate programs that specifically focus on this.

Katelyn Morrison (kcmorris@andrew.cmu.edu)
2023-11-26 09:31:16

*Thread Reply:* I haven't seen specific programs, but I have seen people apply to either CS-related programs or another domain (i.e., civil engineering or engineering and public policy) and then work with/get advised by professors who work on CompSust related topics. There are also lots of summer schools like CV4Ecology summer school and the Climate Change AI summer school. 🙂

Robin Ranabhat (robinnarsingha123@gmail.com)
2023-11-27 10:22:10

*Thread Reply:* Thank you !!

Olof Mogren (olof.mogren@ri.se)
2023-11-26 06:56:36

Looking for people to follow on Bluesky! Looks like a promising alternative to Twitter. Here is my profile: https://bsky.app/profile/olofmogren.bsky.social

Blair Costelloe (blaircostelloe@gmail.com)
2023-11-27 03:24:10

*Thread Reply:* I still have a spare code if anyone wants to sign up!

❤️ Olof Mogren, Aakash Gupta
Olof Mogren (olof.mogren@ri.se)
2023-11-27 07:50:16

*Thread Reply:* I have now created a public feed for AI for climate change! https://bsky.app/profile/did:plc:fyw3wpo7owjjc362rp42upzo/feed/aaabbqsgt6bd2

Sako Arts (sako@fruitpunch.ai)
2023-11-27 04:42:43

CALL FOR BEAR 🐻 DATA Hi all, Together with BearID :bearid: , ARM, Hack the Planet and NXP, we at FruitPunch will be organizing a bear identification Challenge. This Challenge will cover a couple of things: low-powered bear classification, bear detection, bear face detection and bear identification. While we have some datasets, we are looking for additional datasets to end up with a more robust model. Do any of you have or know of a dataset that includes annotated images of bears? The annotations can be anything: simply having wild bears in them, a bounding box around the bear, a bounding box around the bear's face, or identified individual bears.

Thanks in advance and have a very fruitful day! 🍉

:bearid: Ed Miller
Casey Clifton (caseyclifton@proton.me)
2023-11-27 04:45:15

*Thread Reply:* Apparently there's a 'bear guy' in Alaska with a huge number of images, but he's pretty off the grid. A friend was trying to contact him recently - I'll check and see if he got through!

Sako Arts (sako@fruitpunch.ai)
2023-11-27 05:31:29

*Thread Reply:* Oeh sounds exciting, would be great to get in touch!

Casey Clifton (caseyclifton@proton.me)
2023-11-27 04:49:30

Does anyone have experience or insights in the use of eDNA vs camera traps (or other 'traditional' sensing approaches) for monitoring biodiversity? Both have needs for AI, and I'm looking to pick an area to focus my work on.

I've just been on a 3 month camera trap project covering 35 different sites, and having recently learned of eDNA, I'm wishing we had taken soil samples instead.

In the short term there are issues of sequence data availability, costly lab services, etc., and technical challenges like DNA degradation, but I'm assuming they are solvable with sufficient time and resources spent improving the tech and sampling methods.

Are there any fundamental unsolvable limitations of eDNA (or anyone you might recommend I reach out to on the matter)?

Any thoughts are much appreciated 🙏 🙂

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-11-27 08:52:39

*Thread Reply:* Any of the folks at NatureMetrics or SimplexDNA would be great for eDNA advice!

🙌 Casey Clifton
Talia Speaker (talia.speaker@wildlabs.net)
2023-11-27 13:21:53

*Thread Reply:* My team at WWF has done some comparison studies on species captured, costs, etc. between the two methods - happy to connect you if it's of interest! https://www.nature.com/articles/s41598-021-90598-5

🙌:skin_tone_3: Alan Stenhouse
🙌 Sam Lapp
Casey Clifton (caseyclifton@proton.me)
2023-11-27 23:24:26

*Thread Reply:* Thanks @Talia Speaker! I'll read this and come back to you

Riley Knoedler (mknoedler@west-inc.com)
2023-11-28 14:27:53

*Thread Reply:* Mark Davis over at Illinois Natural History Survey is doing some interesting work with eDNA, just saw a presentation from him comparing the species found with eDNA vs camera traps (surprisingly little overlap!)

🙌 Casey Clifton
Casey Clifton (caseyclifton@proton.me)
2023-11-28 20:59:37

*Thread Reply:* That's interesting @Riley Knoedler - do you have a link to the pres i could see?

Riley Knoedler (mknoedler@west-inc.com)
2023-11-29 10:58:11

*Thread Reply:* @Casey Clifton It was at the REWI Solar Symposium conference so I don't have a way to distribute the presentation slides, but you might be able to discuss it with Mark! I'm not sure if that work is published yet.

👍 Casey Clifton
Paul Allin (allinpaul@gmail.com)
2023-12-06 08:27:37

*Thread Reply:* We have just finished the data collection for comparison of eDNA methods (air, dust, and water ) to camera trap data. Happy to chat

🎉 Carly Batist
👍 Casey Clifton
mimi (arandjel@eva.mpg.de)
2024-01-17 05:28:27

*Thread Reply:* here's our paper on iDNA and camera traps https://onlinelibrary.wiley.com/doi/full/10.1002/edn3.46 Biggest limitation in our mind is how to go beyond species detection and into population estimates.

👍 Ștefan Istrate, Carly Batist, Casey Clifton
Edwin Reed-Sanchez (ereedsanchez@gmail.com)
2023-11-29 13:38:55

Hello Everyone, I am a researcher at CUNY looking to deploy an automated camera trap system, ideally using a wifi-enabled camera (video/stills). I would prefer it over any cellular-based camera. Are there any cameras that you recommend? Thank you.

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-11-29 18:48:21

*Thread Reply:* Oh hey I’m at CUNY too!! I’m a PhD student at the GC (Biological Anthro dept). Based out of Hunter though.

Carly Batist (cbatist@gradcenter.cuny.edu)
2023-11-29 18:49:24

*Thread Reply:* For your question - there are lots of good resources on WILDLABS about different trail cams, and it might also be a good place to re-post your question as well.

Edwin Reed-Sanchez (ereedsanchez@gmail.com)
2023-12-01 13:21:23

*Thread Reply:* Thanks!

Edwin Reed-Sanchez (ereedsanchez@gmail.com)
2023-12-01 13:21:37

*Thread Reply:* What are you currently using?

Filippo Varini (fppvrn@gmail.com)
2023-11-30 08:19:55

Hello everyone, In the last few months, I have been interested in the use of AI and Machine Learning to develop better Ocean Biodiversity Monitoring technologies.

I would love your help in exploring the field and I aim to share a report that would benefit the whole community!

I would like to map out all advanced marine biodiversity monitoring technologies. So far I investigated eDNA (and Metabarcoding), BRUVs, UVC, Scientific Fishing, and Bioacoustics. What else should I look into?

I am particularly interested in successful applications of such technologies in monitoring Ecosystem Health and the success of Marine Protected Areas. Do you know any related study or project? What papers should I read? What organisation should I look into? Who should I speak with?

Thank you so much!

🌊 Talia Speaker, Katelyn Morrison, Sara Beery
Florence Cuttat (florence.cuttat@nature-counts.org)
2023-12-06 10:24:09

*Thread Reply:* Hi Filippo, We are working on using AI for fish monitoring. --> streamocean.io Let me know if you have some questions

✅ Filippo Varini
Olof Mogren (olof.mogren@ri.se)
2023-11-30 10:14:02

@Ben Weinstein talking right now 🙂 https://rise.zoom.us/j/208117140?pwd=aWJsbnIyai92RUk1cjcrcFMxWDROUT09

👏 Malte Pedersen
Olof Mogren (olof.mogren@ri.se)
2023-12-01 03:43:27

*Thread Reply:* Here is the recording of this excellent talk! https://www.youtube.com/watch?v=7yXsFFbWgcs&list=PLqLiVcF3GKy0-jZFGg-VqLzh51LqCfduN&index=1

🙏 Michael Bunsen
👍 Michael Bunsen
Carly Batist (cbatist@gradcenter.cuny.edu)
2023-11-30 15:15:39

For any of you on more of the software side - still time to apply for our Software Engineer position at Rainforest Connection (RFCx) & Arbimon!💻

Join the team to help develop software for biodiversity monitoring. Fully remote position, open until filled. Full job description, responsibilities and requirements here & attached below.

To apply: please submit your resume and a cover letter to contact@rfcx.org with the subject “Position: Software Engineer” and tell us a bit about yourself!

❤️ Lucia Gordon
Devis Tuia (devis.tuia@epfl.ch)
2023-12-01 02:33:19

just a reminder for this beautiful position for Deep SDMs in my team! If you feel like your next life step could be in Switzerland, please apply! We will start looking at the applications in 10 days!

Please write me a msg if you have questions, happy to give info!

👍 Oisin Mac Aodha, Justin Kay, Olof Mogren, Omiros Pantazis, Robin Zbinden, Lloyd Hughes, Yonghao Xu, Sara Beery, Thomas Radinger
Otto Brookes (otto.brookes@bristol.ac.uk)
2023-12-02 09:13:35

📣 A reminder that the deadline for the IET Computer Vision Special Issue: Camera Traps, AI, and Ecology is coming up! More details here!

🎉 Sara Beery, Jon Van Oast, Dante Wasmuht, Vardaan Pahuja
Vardaan Pahuja (vardaanpahuja@gmail.com)
2023-12-06 00:37:19

*Thread Reply:* Hi, thanks for sharing this! I have a few questions, will send them in private chat.

✅ Otto Brookes
Piotr Tynecki (piotr@tynecki.pl)
2023-12-05 09:37:42

Greetings from :flag_pl: 🦬 🐗 🌳!

👋 Elizabeth Campolongo, Sara Beery
🦌 Kalindi Fonda
Fadel (fadel.seydou@gmail.com)
2023-12-06 07:57:29

Hi all, hope you're doing great.

@Paul Allin and myself are working on the topic of "automated aerial census of large herbivores in southern Africa using ML". What is specific to our project is that we want to experimentally assess the influence of flight/data acquisition parameters on the accuracy/precision/recall of an ML-based wildlife detector.

We are currently facing a few issues on which we would appreciate your input:
• How to label a large amount of data efficiently, with unknown species and a limited workforce? Our current approach is to use a yolov8s model (trained to localize herbivore wildlife on similar data) to obtain bounding boxes and then attribute classes manually. Do you see flaws in our approach or have recommendations?
• Do you have ideas on how we can improve the generalization of our wildlife localizer?
• How can we account for false negatives (introduced by our wildlife localizer) in our labeling workflow?
Thank you🙌
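[Editor's note] The pre-labeling workflow described above — boxes from a detector, classes assigned by hand — can be supported by emitting YOLO-format label files with a placeholder class, so annotators only correct species labels instead of drawing boxes from scratch. A minimal sketch; the format string follows the standard YOLO txt layout (`class cx cy w h`, all normalized):

```python
def yolo_prelabel_lines(boxes, placeholder_class=0):
    """Format detector boxes (center-x, center-y, width, height; all
    normalized to [0, 1]) as YOLO label lines with a placeholder class
    for an annotator to correct later."""
    return [
        f"{placeholder_class} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
        for (cx, cy, w, h) in boxes
    ]

# One pre-labeled detection, written as if to img_0001.txt.
lines = yolo_prelabel_lines([(0.5, 0.5, 0.2, 0.3)])
```

One caveat this sketch cannot fix: false negatives of the localizer never appear in the pre-label files at all, so some fraction of images still needs a full manual pass to estimate the miss rate.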

Devis Tuia (devis.tuia@epfl.ch)
2023-12-06 08:09:06

*Thread Reply:* Hi Fadel, I think the AIDE platform could help you out (and sorry everyone if I bring that one up often, it’s really not for bragging). With AIDE you can start labeling your images and tune a pre-trained model on those. Then the trained model will propose candidate detections to review within an active learning loop. You can customize species as you go. You can contact @Benjamin Kellenberger, who is the mastermind behind it.

➕ Sara Beery, Ben Weinstein, Robin Zbinden, Dan Morris, Caleb Robinson
Fadel (fadel.seydou@gmail.com)
2023-12-07 10:00:32

*Thread Reply:* Thank you @Devis Tuia, I will look into it 😁

David Russell (davidrussell327@gmail.com)
2023-12-07 15:44:54

*Thread Reply:* Hi Fadel, you may be interested in a tool that my supervisor Derek Young at UC Davis developed for scripting runs of the Agisoft Metashape photogrammetry software. He also has some work assessing the impact of flight parameters on tree detection using geometric methods: https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13860.

💯 Fadel
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2023-12-14 16:37:13

*Thread Reply:* Hi @Fadel, we're building our second round aerial flight detector now for the KAZA Transfrontier survey. Happy to share ideas. @Lasha Otarashvili is the primary on the training effort.

🙌 Fadel, Paul Allin
Joseph Dimos (joedimos@gmail.com)
2023-12-07 08:06:08

Greetings everyone! A bit about me and my ongoing work. I am working on data assimilation models with alignment in ocean/atmosphere dynamics, drawing upon the notion of ‘online/offline’ learning and associated sub-models. In addition, I am working on defining an objective function for maps of dynamical snapshots. I am working on some biodiversity problems too, namely those that employ deep learning. For instance, I am optimising some YOLO algorithms for post-processing of camera trap data, focusing in particular on bees. I am adapting a dynamical model that is deployed in a YOLO framework (with a ResNet50 R-CNN) to learn the dynamics of an environment. This is done with the COCO dataset, which I’m working with fairly deeply.

👏 Jonah Fox, Dan Morris, Jon Van Oast
Dan Stowell (dan.stowell@naturalis.nl)
2023-12-07 09:06:04

Job in Austria? Junior Group Leader, Machine Learning in Acoustics https://www.oeaw.ac.at/fileadmin/subsites/Jobs/ISF/ISF156JGL223.pdf

🔈 Oisin Mac Aodha
🎉 Jon Van Oast
Casey Clifton (caseyclifton@proton.me)
2023-12-08 03:14:54

Hello all, does anyone have any scripts/resources/papers/tips on fine-tuning MegaDetector (e.g. on images taken in a specific region) that they'd be open to sharing? Thanks 🙂

Peter van Lunteren (contact@pvanlunteren.com)
2023-12-08 03:26:43

*Thread Reply:* Hi Casey, here are some tutorials:

  1. https://www.kaggle.com/code/agentmorris/fine-tuning-megadetector
  2. https://pub.towardsai.net/train-and-deploy-custom-object-detection-models-without-a-single-line-of-code-a65e58b57b03 The second one is more for adding custom labels (i.e., species identification), but it can also be used for fine-tuning the existing labels (animal, person, vehicle).
👍 Casey Clifton, Dan Morris
Casey Clifton (caseyclifton@proton.me)
2023-12-08 03:31:04

*Thread Reply:* Thanks!

Dan Morris (agentmorris@gmail.com)
2023-12-08 12:21:36

*Thread Reply:* Casey, are you trying to fine-tune MegaDetector in the sense of making it work better on species or camera angles it doesn't work well on, or in the sense of adding a species classification stage? Just a reminder that if you're referring to the latter, you may find it easier not to fine-tune MegaDetector at all, and instead train a classifier on the crops that come from MegaDetector. It's not obvious one way or the other, but if I had to place my bets on one approach, I would place my bets on the two-stage approach, and I think Peter's conclusion has been the same (Peter, don't let me put words in your mouth, but I vaguely remember you saying that).

If it's the former - making MD work on things it kind of works on, but not as well as you'd like - fine-tuning is a good solution. I've just done this for the first time recently (for a couple of reptile-heavy use cases), and haven't written up a tutorial yet, but can provide some tips and tricks if this is your scenario.

Somewhere in between would be the case where MD works well on your animals, but you want to reduce false positives for a particular ecosystem or set of cameras; I don't know that anyone has looked into this, but I think I would recommend a post-hoc classifier in this case (junk/not-junk) rather than fine-tuning. But I would only bother doing this if you have an egregious number of false positives.
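[Editor's note] The two-stage approach recommended above — a separate classifier trained on MegaDetector's crops — starts from MD's batch-output JSON. A minimal sketch of turning its normalized boxes into pixel crop rectangles for a classifier; the field names and `[x, y, w, h]` normalized-box convention follow the MD batch output format as I understand it, so verify against your MD version:

```python
def crop_boxes(md_image_entry, img_w, img_h, conf_threshold=0.2):
    """Convert MegaDetector-style normalized [x, y, w, h] boxes into pixel
    (left, upper, right, lower) rectangles, keeping only animal detections
    (category "1") above the confidence threshold."""
    rects = []
    for det in md_image_entry.get("detections", []):
        if det["conf"] < conf_threshold or det["category"] != "1":
            continue
        x, y, w, h = det["bbox"]
        rects.append((round(x * img_w), round(y * img_h),
                      round((x + w) * img_w), round((y + h) * img_h)))
    return rects

# Hypothetical MD output entry: one animal, one person (filtered out).
entry = {"file": "cam01/img_0001.jpg",
         "detections": [{"category": "1", "conf": 0.85, "bbox": [0.25, 0.5, 0.2, 0.1]},
                        {"category": "2", "conf": 0.90, "bbox": [0.0, 0.0, 0.1, 0.1]}]}
rects = crop_boxes(entry, img_w=2000, img_h=1000)
```

Each rectangle can be passed straight to an image library's crop call, and the resulting crops become the training set for the species classifier.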

Peter van Lunteren (contact@pvanlunteren.com)
2023-12-08 13:59:34

*Thread Reply:* Yes, absolutely @Dan Morris. I've tried fine-tuning MD in the sense of adding a species classification stage, but found that that just costs lots of training data, processing power and yielded mediocre results. If you want species classification, I would definitely work with a custom classifier on top of MD. But if it is for improving detection rate (of the original classes animal, vehicle, and person) on "images taken in a specific region", it might be a good approach - but I haven't tried it out myself 😉

👍 Dan Morris
Casey Clifton (caseyclifton@proton.me)
2023-12-10 18:02:49

*Thread Reply:* Hey @Dan Morris thanks for the info! I'm looking at the former. Specifically, using MD in an Australian setting, where I'm finding it misclassifies native plants as animals, and also just doesn't recognize some animals - there seem to be more false negatives than other papers have reported. Would love to hear of any tips or tricks you have for this.

Dan Morris (agentmorris@gmail.com)
2023-12-10 18:48:40

*Thread Reply:* Gotcha. We may quickly get to the point where it's faster to do this by email (especially if we need to look at sample images), but I'll take a shot here, and if we move to email, I solemnly swear that if I don't report back to this Slack with whatever we learn, I owe everyone here a granola bar.

But, next questions:

  1. Confirm that you're using MDv5?
  2. Are you running MD via run_detector_batch.py, via EcoAssist, or via something else?
  3. What confidence threshold are you using?
  4. Are the misses** reptiles, mammals, birds, or some other scary thing you have in Australia that we don't even have an American word for? Like poison-dragon-spiders, or poison-spider-dragons.
  5. Are the misses** on things that are really obvious to your eye, or are they tail-sticking-into-the-corner kind of misses?
  6. Very rough ballpark: about what percentage of your images are actually empty?
  7. Very rough ballpark: about what percentage of the animals are being missed**?
  8. Very rough ballpark: about what percentage of MD's positives** are false positives? For (6)/(7)/(8), I know the real answer is "it varies a lot across cameras", but just try to average over all your cameras.

**All those asterisks are there as a reminder that "positive", "negative", "miss", etc. are just placeholder words for now, they're undefined in this thread until we know the answers to (1)/(2)/(3).

😂 Casey Clifton
Casey Clifton (caseyclifton@proton.me)
2023-12-11 07:58:56

*Thread Reply:* Happy to email / share pics / supply granola

  1. Yep v5
  2. Via run_detector_batch.py
  3. I've tried a few levels, and if I just score a classification of animal vs. not animal I get the following precision/recall values:
     a. Conf 0.2 = 0.74 / 0.76
     b. Conf 0.5 = 0.91 / 0.7
     c. Conf 0.8 = 0.97 / 0.53
  4. Most of them are night shots of Quenda - the species we're investigating that is native to Australia. As far as i know they're not poisonous, but it's about the only thing that hasn't bitten us yet so who knows...
  5. They are usually obvious when you look at multiple images in a row (we take 5 in 5 seconds for each trigger) because the animal is the only thing that's moving, but they're basically impossible to detect from a single shot with no context.
     a. Has anyone looked into detection with temporal context to solve this? I think it'd be essential because Quenda are nocturnal, small, and fast, so they can be pretty blurry, easily hide behind a small bush, etc.
  6. About 85% of the ones we've labelled so far are actually empty, but I know there are sites we haven't labelled yet that are 99% empty
  7. As per recall numbers above, about 25-45%
  8. False positives were high but since removing repeating bounding box locations a lot of the FPs due to plants are removed so this is negligible at the moment
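
[Editor's note] Image-level precision/recall numbers like those in (3) can be computed with a short threshold sweep over per-image scores. A sketch with hypothetical scores and labels (here each image's score is its top animal-detection confidence, and its label is 1 if it truly contains an animal):

```python
def precision_recall(scores, labels, threshold):
    """Image-level precision/recall at one confidence threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical: four true-animal images and one empty image.
scores = [0.9, 0.6, 0.3, 0.1, 0.7]
labels = [1,   1,   1,   1,   0]
p, r = precision_recall(scores, labels, 0.5)
```

Sweeping `threshold` over a grid gives the full precision/recall trade-off curve, which is a more useful basis for picking an operating point than any single threshold.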
Dan Morris (agentmorris@gmail.com)
2023-12-11 12:01:29

*Thread Reply:* Super-helpful. And also I just learned what a quenda is.

At the end of the day, if the thing you're looking for is only visible in image sequences, MD can't see it. So I think there are two questions here: (1) what are your possible paths forward for finding things that are only visible in multiple images? and (2) how can you get the most out of MD? I'll take a shot at (1) here, though hopefully others will chime in too, since I have no particular expertise there, then I'll address (2) in a separate reply.

Re: moving beyond single images...

AFAIK, there is no off-the-shelf tool that will help you find animals that are mostly only visible in sequences. There's Sara's work on Context-RCNN:

https://arxiv.org/abs/1912.03538 https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/context_rcnn.md

...but I would say that's solving a lot more than just the problem of finding a quenda scurrying in the bushes, and is therefore justifiably pretty complex, and I don't know of anyone that's deployed it.

If you are willing to get into the business of training custom models but don't quite want to leave the universe of RGB images (it's hard to overstate how much more complicated the world gets when you leave the world of RGB images, you lose 10 years of computer vision infrastructure development built around RGB images and video), I was peripherally involved with a paper that proposed a bunch of relatively simple ways to fit sequence information into RGB images... things like "swap out the blue channel and replace it with a motion channel":

https://arxiv.org/abs/2005.00116

I'm still a fan of that approach, and if you have enough data, it's relatively easy to explore (since you're still training RGB models). But again, AFAIK no one has deployed this. I think it would be particularly neat to train a detector using one of these approaches, which was outside the scope of that paper (which was focused on classifiers), but you would need to get a whole bunch of boxes, and then do this complicated research-y thing.
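
[Editor's note] The channel-substitution idea from that paper can be illustrated in a few lines. This is a simplified sketch of one variant ("swap the blue channel for a motion channel"), not the paper's implementation, operating on nested lists of RGB tuples rather than real image arrays:

```python
def swap_blue_for_motion(frame, prev_frame):
    """Replace each pixel's blue channel with the absolute grayscale
    difference from the previous frame, so a standard RGB detector can
    'see' inter-frame motion without any architecture changes."""
    out = []
    for row, prev_row in zip(frame, prev_frame):
        out_row = []
        for (r, g, b), (pr, pg, pb) in zip(row, prev_row):
            gray = (r + g + b) // 3
            prev_gray = (pr + pg + pb) // 3
            out_row.append((r, g, abs(gray - prev_gray)))
        out.append(out_row)
    return out

# Tiny 1x2 "frames": only the second pixel changes between frames.
prev = [[(10, 10, 10), (10, 10, 10)]]
cur  = [[(10, 10, 10), (100, 100, 100)]]
motion = swap_blue_for_motion(cur, prev)
```

The appeal is exactly what's noted above: the output is still a plain 3-channel image, so the entire RGB training/inference toolchain keeps working unchanged.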

And although this might not help you now, if you can capture and label a bunch of videos, you might try training a custom model via Zamba Cloud:

https://www.zambacloud.com/

👍 Casey Clifton
Dan Morris (agentmorris@gmail.com)
2023-12-11 12:34:08

*Thread Reply:* Re: getting the most out of MD...

FWIW, even for easy datasets, I never use a confidence threshold higher than 0.2 with MDv5, and for difficult datasets, I'll go as low as 0.05. So I think you have some room to reduce your confidence threshold, though I don't think it will immediately get your recall up from 0.76 to something super-respectable, because invisible is still invisible. But if 85% of your images are blank, IMO you have a lot of room to compromise on precision and still have AI save you lots of time, if you can get to adequate recall.

But the main tricks I use for difficult datasets are, from most to least important:

  1. Repeat detection elimination... good for maybe a 2-3% precision bump at reasonable thresholds. For easy datasets, I typically spend about 5 minutes per million images on this; for difficult datasets, this goes up to maybe 10 minutes per million images. Once you get the hang of it, this will always increase precision with no impact on recall. This is particularly important for cases where you want to use super-low confidence thresholds.
  2. Test-time augmentation, by doing inference via this script instead of run_detector_batch. This will raise the confidence of everything a little, but it will particularly raise the confidence of low-confidence animals. I.e., when difficult animals are present, there is real signal there; it's not just raising the confidence of every detection. In principle this can improve both precision and recall, but in practice, I've only ever seen it at most slightly hurt precision, while always either helping recall or leaving recall the same.
  3. Combining results from MDv5a and MDv5b, using this script. Your case sounds like a difficult one that may be out of reach of one-image-at-a-time ML, but I will DM you to see what we can squeeze out of MD.
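
The merging in trick 3 is handled by the linked script; purely to illustrate the general idea (my own sketch, not the actual script's logic), combining the outputs of two detectors boils down to taking the union of their boxes and reconciling heavily overlapping ones:

```python
def iou(a, b):
    """IoU of two boxes in [x, y, w, h] format (normalized coordinates)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def merge_detections(dets_a, dets_b, iou_threshold=0.6):
    """Union of two detection lists, e.g. from MDv5a and MDv5b.

    Each detection is a dict with 'bbox' ([x, y, w, h]) and 'conf' keys.
    Where boxes from the two models overlap heavily, the higher-confidence
    one wins (note: matched detections from dets_a are updated in place).
    """
    merged = list(dets_a)
    for db in dets_b:
        match = next((da for da in merged
                      if iou(da['bbox'], db['bbox']) >= iou_threshold), None)
        if match is None:
            merged.append(db)  # box only the second model found
        elif db['conf'] > match['conf']:
            match['bbox'], match['conf'] = db['bbox'], db['conf']
    return merged
```

The recall benefit comes from the `merged.append(db)` branch: anything either model finds survives, at some cost in precision.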
Casey Clifton (caseyclifton@proton.me)
2023-12-11 19:36:41

*Thread Reply:* This is great, thanks. Will reply in DMs!

Talia Speaker (talia.speaker@wildlabs.net)
2023-12-08 12:03:19

📣 Important news from WILDLABS: Survey & Funding Opportunity! 📣

🌎 Help shape the sector: If you're developing or using AI or any other digital technologies for conservation, please take a few minutes to complete and share the 2023 State of Conservation Technology Survey. This helps capture what's working and what the needs are in the community, and also builds an evidence base for all of our work: https://colostate.az1.qualtrics.com/jfe/form/SV_e5kiopCmrZXX1KS

💵 Apply for funding: There are $60,000, $30,000 and $10,000 grants available for up to 14 conservation technology projects through the brand new WILDLABS Awards! AI for conservation projects are totally fair game. Closes January 14th: https://wildlabs.net/funding-opportunity/wildlabs-awards-2024-supporting-accessible-affordable-and-effective-innovation

colostate.az1.qualtrics.com
wildlabs.net
🎉 Carly Batist, Jon Van Oast, Suzanne Stathatos, Adrien Pajot, Piotr Tynecki, Ando Shah, Stephanie O'Donnell, Ed Miller
Talia Speaker (talia.speaker@wildlabs.net)
2023-12-08 12:24:33

*Thread Reply:* For questions on the awards, reach out to our new program manager @Adrien Pajot 🙂

Ben Weinstein (benweinstein2010@gmail.com)
2023-12-08 13:41:46

Here is one for the community to process that I think gives us a good sense of where we are and where we need to go. I reviewed an ecological machine learning paper for 2 rounds at an unnamed but prestigious journal. Good paper, needed a bit of tightening. I asked them to make the data available. They posted it online without complaint. I recommended accepting the paper, explicitly saying "Reviewer #1: Thank you for making the data available. This is an excellent paper." Then in the published article they did not include the data link and put "Data will be made available on request." in the data availability statement, which is on a separate form from what the reviewers see, even though the data is currently still posted online (I went back to the response to reviewers to check). While lots of harmless explanations exist, it doesn't look great. As reviewers, keep your eyes out for this.

😢 Ștefan Istrate, Sara Beery, Caleb Robinson, David Russell, Subhransu Maji, Akash Nagaraj, Suzanne Stathatos, Enis Berk Çoban, Mitch Fennell, Toryn Schafer, Emilio Luz-Ricca
😮 Jon Van Oast, Michael Bunsen, Enis Berk Çoban
Dan Stowell (dan.stowell@naturalis.nl)
2023-12-12 12:36:43

*Thread Reply:* I think you should raise this with the editors. The article should be amended with an erratum, since it deviates from the article that was accepted (whether by "accident" or not). The editors should be willing to handle this.

Ben Weinstein (benweinstein2010@gmail.com)
2023-12-12 12:37:16

*Thread Reply:* ya, update here, editors said they are looking into it.

👍 Dan Stowell
Luke Sheneman (sheneman@uidaho.edu)
2023-12-08 17:42:53

For anybody working with video of wildlife, etc.: I wrote a little convenience tool around YOLOv5 to extract video clips from source MP4 video files that contain animals. It is intended to be used with models like MegaDetector v5 to efficiently process large collections of videos to maximize overall throughput. It performs inference at specified intervals (e.g. every x seconds). Using multiprocessing, it can process multiple videos simultaneously, and it uses locking to allow safe concurrent access to the GPU. In reality, I/O will be your bottleneck with video... not YOLOv5 inference. It spits out clips to an output folder and builds a small metadata report file which describes the clips relative to their source video file and includes summary statistics on the confidence scores for the clips. Hopefully you can find it useful!

https://github.com/sheneman/tigervid
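
The sampling-and-clipping logic described above can be sketched roughly like this (a hypothetical simplification of the idea, not tigervid's actual code; the real tool adds multiprocessing, GPU locking, and the metadata report):

```python
def frames_to_sample(total_frames, fps, interval_seconds=1.0):
    """Indices of frames to run the detector on: one every interval_seconds,
    so inference touches only a small fraction of the video."""
    step = max(1, int(round(fps * interval_seconds)))
    return list(range(0, total_frames, step))

def detections_to_clips(hit_frames, fps, pad_seconds=2.0, merge_gap_seconds=5.0):
    """Group frame indices where animals were detected into (start, end)
    clip ranges (in frames), padding each hit and merging nearby hits
    into one clip rather than emitting many tiny overlapping clips."""
    pad = int(round(fps * pad_seconds))
    gap = int(round(fps * merge_gap_seconds))
    clips = []
    for f in sorted(hit_frames):
        start, end = max(0, f - pad), f + pad
        if clips and start - clips[-1][1] <= gap:
            # Close to the previous clip: extend it instead of starting a new one
            clips[-1][1] = max(clips[-1][1], end)
        else:
            clips.append([start, end])
    return [tuple(c) for c in clips]
```

Everything between the sampled frames is never decoded for inference, which is why (as Luke notes) video I/O, not the detector, ends up being the bottleneck.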

❤️ Suzanne Stathatos, Yseult Hb, Sara Beery, Charlotte Mallo, Stephanie O'Donnell
👍 Dan Morris, Shir Bar, Sara Beery, Joseph Dimos, Otto Brookes, Jon Van Oast, Aakash Gupta, Evan Eskew, Valentin Gabeff, Kakani Katija, Alasdair Davies, Martin Marzidovsek
👍:skin_tone_3: Alan Stenhouse
Ben Weinstein (benweinstein2010@gmail.com)
2023-12-08 17:47:16

*Thread Reply:* That's great, I've been hoping someone would do this and keep it updated. I still get a lot of downloads of https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13011, which is getting pretty old at this point. Can I start pointing users towards you? What kinds of models are in there to start?

Luke Sheneman (sheneman@uidaho.edu)
2023-12-08 17:54:24

*Thread Reply:* That looks really cool! It's currently just a wrapper around YOLOv5, so in theory it will work with any YOLOv5 weights. I am using it with a collaborator from the University of Michigan, with MegaDetector v5, to detect simple presence/absence of animals in a giant video collection, but it should work with other weights (species classifiers, etc.) as well with minimal changes.

Dan Morris (agentmorris@gmail.com)
2023-12-08 21:37:32

*Thread Reply:* I have one super-important question: did you make that ASCII art tiger?

🙂 Jon Van Oast
😂 Stephanie O'Donnell
Joseph Dimos (joedimos@gmail.com)
2023-12-10 13:13:20

*Thread Reply:* Working on a neural pipeline for YOLOv5 myself. That way, feature extraction is optimised for data assimilation tasks

Anna Willoughby (arwill19@gmail.com)
2023-12-31 14:13:14

*Thread Reply:* oo will take a look at using this.

Edward Bayes (bayesbayes@gmail.com)
2023-12-13 13:17:43

Hi everyone and happy holidays! 🎄 It’s been so awesome to see the explosion in semi-supervised, transformer-based models in recent months from general models like SAM (and now EfficientSAM) to specialised models like BioCLIP. I’ve heard smatterings of conversations about applications in conservation here, but was wondering if anyone has done any thorough evals on how they compare in terms of accuracy and speed to larger supervised models like MD? If anyone is down to hop on a call I’d love to brainstorm!

P.S. I’ve created a demo to test such models in the browser using ONNX runtime (no backend). It has CLIP and MobileNet at the moment, but I’m planning on adding other models later to try to do head to head evals of various models.

❤️ Elizabeth Campolongo, Sara Beery, Andy Viet Huynh, Emilio Luz-Ricca, Anton Alvarez
👀 Stephanie O'Donnell, Jeremy Forest
Carly Batist (cbatist@gradcenter.cuny.edu)
2023-12-13 13:31:33

*Thread Reply:* @Jason Holmberg (Wild Me) @Sara Beery

Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2023-12-13 19:00:50

Hi Everyone; what are the best international conferences to learn about AI Foundational Models and RNN/LSTM Models in the context of Satellite Earth Observations for monitoring ecosystems (e.g. monitoring ecosystem change including land cover/land use change)? Thank you in advance!

❤️ Jon Van Oast
Ruben Remelgado (ruben.remelgado@gmail.com)
2023-12-14 02:54:02

*Thread Reply:* Geo BON

Devis Tuia (devis.tuia@epfl.ch)
2023-12-14 04:02:04

*Thread Reply:* EarthVision at CVPR (though the focus in on remote sensing applications at large rather than ecosystems)

✅ Antonio Ferraz
Gabriel Tseng (gabrieltseng95@gmail.com)
2023-12-14 10:31:39

*Thread Reply:* This may also be of interest: https://aiforconservation.slack.com/archives/CLWGQ4BJ6/p1702567887929979

✅ Antonio Ferraz, Gedeon
Gabriel Tseng (gabrieltseng95@gmail.com)
2023-12-14 10:31:27

Hi everyone!

I wanted to share a workshop that might be relevant here: Machine Learning for Remote Sensing at ICLR 2024. It will take place in person in Vienna on May 11th, 2024, and have a virtual participation option.

> The Machine Learning for Remote Sensing workshop will provide a platform for machine learning research that is relevant for applications in remote sensing and/or environmentally important. A panel of domain scientists from international organizations, such as the IAEA or the Red Cross will discuss the role of data and benchmarks in applied machine learning and our keynote speakers are leading researchers in the intersection of machine learning and remote sensing. We also invite you to submit to the workshop; the submission deadline for a 4-page non-archival research paper is February 03, 2024. The detailed call for papers is available here

👍 Justin Kay, Devis Tuia, Olof Mogren, David Russell, Ronny Hänsch, takatomi-k, Sara Beery, Gedeon, Chris Yeh, Robin Zbinden, Jason Holmberg (Wild Me), Nico Lang, Joseph Dimos, Thijs van der Plas
👋 Sara Beery, Gedeon, Chris Yeh, Jason Holmberg (Wild Me)
Miaomiao (mzhang@hbs.edu)
2023-12-14 22:33:18

Good evening everyone! I am sharing an exciting opportunity in Sustainability x AI. See the poster, a video, or the blurb here for detail:

Get ready to navigate solutions for a sustainable future using the power of generative AI! The action kicks off with a 36-hour virtual round on January 6-7th, 2024, and continues in person on January 18th, 2024 at Microsoft Toronto, where 8 finalist teams will be invited to present to a panel of industry leaders, which include leading VCs, AI, and sustainability experts. Teams of up to four participants are eligible to participate. Register your interest here.

Stay up to date by following our website for the latest announcements and event details. Should you have any questions or concerns, don’t hesitate to reach out to us at info@genaicompetition.com.

genaicompetition.com
Ben Weinstein (benweinstein2010@gmail.com)
2023-12-15 16:12:11

I know many people in the community use label-studio for machine learning annotations. I've been working all week on getting model predictions overlaid on a label-studio server to create an active learning pipeline. I can make a blog post if this is something other people need to do; it wasn't trivial. Eventually it will go into a DeepForest + label-studio integration. So on image load, the annotator sees existing boxes with labels and scores, and can select a set of them, delete them, change the labels, and save them as annotations. Message me if you need help with this.
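
For anyone attempting the same thing: the general shape of what you import is a task whose predictions carry rectangle results. A rough sketch of converting normalized detector boxes into that structure (based on my reading of the Label Studio pre-annotation docs; the `from_name`/`to_name` values must match the control and object names in your labeling config, so treat those as assumptions to verify, not gospel):

```python
def boxes_to_ls_task(image_url, boxes, model_version='detector-sketch'):
    """Build a Label Studio import task with pre-annotations.

    `boxes` is a list of (label, score, x, y, w, h) tuples with coordinates
    normalized to [0, 1]; Label Studio expects percentages. The 'label' and
    'image' names below are assumed from a typical labeling config.
    """
    results = [
        {
            'from_name': 'label',     # control tag name in your config
            'to_name': 'image',       # object tag name in your config
            'type': 'rectanglelabels',
            'value': {
                'x': 100 * x, 'y': 100 * y,
                'width': 100 * w, 'height': 100 * h,
                'rectanglelabels': [label],
            },
            'score': score,
        }
        for (label, score, x, y, w, h) in boxes
    ]
    return {
        'data': {'image': image_url},
        'predictions': [{'model_version': model_version, 'result': results}],
    }
```

A list of these task dicts, dumped to JSON, is what you'd import so annotators see the pre-drawn boxes on image load.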

👍 Sara Beery, Dan Morris, Caleb Robinson, David Russell, Shir Bar, Yseult Hb, Thor Veen, Nico Lang, Aakash Gupta, Cameron Trotter, Georgia Atkinson, Emilio Luz-Ricca, Lucia Gordon, Mike Trizna, abdon
👍:skin_tone_5: Prabath Gunawardane
👍:skin_tone_3: Alan Stenhouse
Dan Stowell (dan.stowell@naturalis.nl)
2023-12-18 12:16:51

Postdoc job in computer vision for nature, in Denmark, deadline 3rd Jan: https://international.au.dk/about/profile/vacant-positions/job/postdoc-for-computer-vision-and-deep-learning-applied-to-biodiversity-monitoring

👍 Georgia Atkinson, Justin Kay, Lucia Gordon, Sara Beery
Brandon Hays (brandon.hays@duke.edu)
2023-12-18 13:54:06

Hey folks - long time listener, first time caller!

Does anyone here have leads on automated identification of individual elephants in camera trap photos? I'm an ecology PhD student with limited tech know-how working on elephant ecology/conservation in Thailand. I'd love to be able to get higher accuracy population estimates. I know that there are successful species-level elephant ID algorithms. And I know people have been manually identifying individuals in photos with good accuracy. But that seems less tenable for areas with elephant populations >500 individuals.

Any suggestions appreciated!!!

Sara Beery (sbeery@caltech.edu)
2023-12-18 13:58:05

*Thread Reply:* @Peter Kulits and I have a participatory system that helps collect labeled data with IDs with humans in the loop, if you don't yet have sufficient training data. One of the trickiest things in this space is that there is very little public data (just the ELPephants dataset, which is only 40 individuals and highly curated) due to limitations in data sharing.

🎉 Jon Van Oast
❤️ Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2023-12-18 14:00:05

*Thread Reply:* A qualitative study on our side seems to suggest that re-ID from camera trap data is, for many species, only possible for a very small portion of the data, even by experts, vs. human-captured DSLR data

Brandon Hays (brandon.hays@duke.edu)
2023-12-19 09:22:52

*Thread Reply:* Hey @Sara Beery and @Peter Kulits! Hmm, yea data accessibility and image quality are tricky. Did you see this paper that the Plotnik research group published in March? https://peerj.com/articles/15130/#aff-2

They're doing everything by human verification of videos from camera traps and getting reasonable-seeming results! I was also thinking that, if you could afford it, some of that difficulty in getting high-quality imagery from camera traps could be resolved by doubling cameras per location (similar to leopard folks trying to catch both sides of the leopard).

👍 Sara Beery
Brandon Hays (brandon.hays@duke.edu)
2023-12-19 09:25:31

*Thread Reply:* Though they are also subsetting the images they analyze in that paper, and only using the high-quality images. Still, I feel like you could leave camera traps out long enough to accrue higher-quality images for most of the elephants in an area, given enough time. I think the next step, modeling populations, would be trickier though, given that variability in data availability.

Also, I don't have camera traps or images yet, but I bet I could find people with that data willing to share for a machine learning project!

Paul Allin (allinpaul@gmail.com)
2024-01-02 04:37:13

*Thread Reply:* Not sure about Indian elephants, but for African elephants we use tears and wrinkles in the ears, or the wrinkle pattern from the forehead to the mouth, which is unique to each individual. This allows us to use images of lesser quality

Brandon Hays (brandon.hays@duke.edu)
2024-01-04 09:01:05

*Thread Reply:* Hi @Paul Allin and @Sara Beery. Thanks for the tips! Could I use that participatory labelling system that Sara mentioned a little ways down the road? I'm writing a proposal to Wildlabs for money to buy camera traps. Once I have traps, I'm planning to take them to elephant sanctuaries in Thailand. There are hundreds to thousands of captive elephants that free range through forests at least part of the year, with humans that know the elephants very well and can ID them with high confidence. Seems to me like the best possible way to build a training dataset! And I'm writing into the grant that the dataset would be publicly available too!

Paul Allin (allinpaul@gmail.com)
2024-01-04 12:03:56

*Thread Reply:* sounds like a great way to collect data of a large group of elephants. What is the end goal?

Brandon Hays (brandon.hays@duke.edu)
2024-01-04 13:27:50

*Thread Reply:* the end goal is to be able to estimate populations in and around a protected area in eastern Thailand. I'm hoping individual ID will give better results than gridded camera trap arrays using more traditional analyses

Paul Allin (allinpaul@gmail.com)
2024-01-05 08:02:28

*Thread Reply:* Would be good to chat, I have some ideas for something similar for African elephant and perhaps there is some overlap in ML

Brandon Hays (brandon.hays@duke.edu)
2024-01-05 09:10:22

*Thread Reply:* Yea, I'd love to chat! Do you have any free time next Monday or Tuesday?

Paul Allin (allinpaul@gmail.com)
2024-01-05 09:18:04

*Thread Reply:* Yes that should work, what time zone are you on?

Brandon Hays (brandon.hays@duke.edu)
2024-01-05 09:19:17

*Thread Reply:* I'm in EST

Paul Allin (allinpaul@gmail.com)
2024-01-05 09:22:55

*Thread Reply:* Okay I’m GMT+2

Paul Allin (allinpaul@gmail.com)
2024-01-05 09:23:11

*Thread Reply:* Which platform do you want to use?

Brandon Hays (brandon.hays@duke.edu)
2024-01-05 09:28:20

*Thread Reply:* let me shoot you a zoom link

Brandon Hays (brandon.hays@duke.edu)
2024-01-05 09:29:54

*Thread Reply:* Could we do 9am my time, 4pm your time?

Brandon Hays (brandon.hays@duke.edu)
2024-01-05 09:30:21

*Thread Reply:* on Monday**

Paul Allin (allinpaul@gmail.com)
2024-01-05 09:41:25

*Thread Reply:* Allinpaul@gmail.com, 4pm is fine with me

Oscar W (omtinez@gmail.com)
2023-12-18 16:13:40

Hi everyone! It's my pleasure to announce that we finally were able to release a dataset containing images of two species of sea stars: https://lila.science/sea-star-re-id-2023/

The paper that goes with it has been approved but needs some minor tweaks before it can be published (hopefully in the upcoming weeks). I'm happy to share the manuscript with anyone interested in the meantime. And of course, happy to answer any questions about the dataset.

❤️ Timm Haucke, Gracie Ermi, Jason Holmberg (Wild Me), Sara Beery
⭐ Dan Morris, Benjamin Hoffman, Maddie Cusimano, Cameron Trotter, Aran Dasan, Andrew Schulz
🤩 Maddie Cusimano
👍 Aamir Ahmad
Cameron Trotter (cater@bas.ac.uk)
2023-12-19 04:58:48

*Thread Reply:* Hi @Oscar W - I'd be interested in giving the paper a read if you're happy to share

👍 Oscar W
Ana Maria Quintero (quinteroossa37@gmail.com)
2023-12-19 11:07:07

Hi everyone! I hope you are all doing well. I'm Ana María from Colombia :flag_co:, working as a Data Scientist. Last week, I came across this fantastic initiative at NeurIPS, and I'm eager to redirect my academic and professional focus toward the field of conservation. I would love to participate as an intern or volunteer in any available projects to gain experience and knowledge in conservation, particularly if they are based in Colombia or Latin America in general. This aligns with my goal as I prepare to apply to academic programs next year. Thank you, and Happy Holidays! :)

❤️ Suzanne Stathatos, Catherine, Piotr Tynecki, Carly Batist, Arjun Subramonian (they/them), Jon Van Oast, Sara Beery, Andy Viet Huynh, Gustavo Perez, Talia Speaker, Shir Bar, Andrew Schulz, Iván Higuera-Mendieta, Aaron Ferber, Robin Zbinden, Gabriel Manso, Alan Stenhouse
Ben Weinstein (benweinstein2010@gmail.com)
2023-12-19 12:20:20

*Thread Reply:* Hi Ana, I work with a number of Colombian and Ecuadorian teams. If you are near Medellín, I highly recommend checking out Juan Parra's lab at Antioquia. Happy to chat further if helpful.

Ana Maria Quintero (quinteroossa37@gmail.com)
2023-12-19 12:33:48

*Thread Reply:* Amazing! Thank you so much, I will contact him

Jose Ruiz-Munoz (jfruizmu@unal.edu.co)
2023-12-20 11:25:40

*Thread Reply:* Hi Ana, I am a faculty member at Universidad Nacional de Colombia. My areas of interest include machine learning and data analysis, with a focus on their applications in environmental monitoring. Please feel free to direct message me if you are interested in discussing potential projects

🙌 Ana Maria Quintero
Aaron Ferber (aferber@usc.edu)
2023-12-20 11:53:58

*Thread Reply:* Also just wanted to say that I worked with Ana co-organizing the NeurIPS 2023 Latinx in AI workshop, and she was extremely on top of things, eager to push projects forward on her own initiative, and had great insights about the different research directions people were working on. Overall, the social event we organized had around 450 people registered, which I think says something about her ability to manage large projects :)

❤️ Ana Maria Quintero, Tiziana Gelmi Candusso
🙌 Ana Maria Quintero
Atriya Sen (atriya@atriyasen.com)
2023-12-19 17:56:26

Hello all, I'm a computer science assistant professor at the U of New Orleans (from 2024, Oklahoma State U). I'm soliciting interest from biologists in the US in joining us in a collaborative proposal with Queen's University in Belfast, focused on high-conservation-value demersal elasmobranchs: species that lay eggs ("mermaid's purses") and that utilize egg-laying sites, for example large skate species in the genera Dipturus, Bathyraja, etc., such as the Barndoor Skate (Dipturus laevis). Thank you.

Magali Frauendorf (magali.frauendorf@slu.se)
2023-12-21 06:15:22

For those interested: the Nordic Society Oikos conference in Lund (Sweden), March 12-15 2024, https://nordicsocietyoikos.glueup.com/event/nordic-oikos-2024-80737/, will have a thematic session on computer vision in ecology and evolution https://nordicsocietyoikos.glueup.com/event/nordic-oikos-2024-80737/thematic-sessions.html. The extended deadline for abstract submission is 12th January 2024!

👍 Piotr Tynecki, takatomi-k, Sara Si-Moussi, Martin Marzidovsek, Tiziana Gelmi Candusso
❤️ Suzanne Stathatos, Shir Bar
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2023-12-21 11:05:35

*Thread Reply:* @Oskar Åström 👀

Burooj Ghani (buroojghani@gmail.com)
2023-12-21 11:28:26

Pleased to share this work with y'all. Enjoy the read! 🙂

https://www.nature.com/articles/s41598-023-49989-z

👍 gvanhorn, Suzanne Stathatos, Georgia Atkinson, Benjamin Hoffman, Dan Morris, Timm Haucke, Ana Maria Quintero, Jason Holmberg (Wild Me), Justin Kitzes, Ghazi Randhawa, Chris Lange
🐦 Holger Klinck, Dan Morris, Taiki Sakai - NOAA Affiliate, Yseult Hb, Jason Holmberg (Wild Me), Takumi Sato, Namitha Suresh, Timm Haucke, Alan Stenhouse
😊 Maddie Cusimano, Jason Holmberg (Wild Me)
🙌 Ben Williams
Dan Morris (agentmorris@gmail.com)
2023-12-22 19:12:21

A few weeks ago @Ian Ingram and I had a chat on this Slack that went something like this (paraphrasing):

--

Ian: "Is there a model zoo for camera trap models where we can publish our models and try other people's models?"

Me: Model zoos are hard, blah blah, compatibility, blah blah, general tone of discouragement and doom.

Ian: "Well what if it was more like just a list of models and instead of a 'model zoo', we called it a 'model wilderness'?"

Me: Oh yes, brilliant then, I like lists and I'm going to start saying "model wilderness".

--

So... I finally got around to collecting a list of all the models I'm aware of for camera trap data, in the sense of publicly-available models you can download as model weight files. I.e., this particular list does not include models that exist only in specific platforms, unless the weights are downloadable.

https://agentmorris.github.io/camera-trap-ml-survey/#publicly-available-ml-models-for-camera-traps

I'm not sure if that's what a "model wilderness" is, but it's a start. I'm not trying to make something particularly structured, but it's not that far from a structured collection of models with automatic download from the sources, sample code, some model-card-like information, etc. If someone wants to formalize that and put together code to actually run all those classifiers, that would not be a ton of work, and it would start to be somewhere between a "model wilderness" and a "model zoo". A "model park" maybe?

Let me know what's missing from that list!

🙌 Casey Clifton, Suzanne Stathatos, David Russell, Taiki Sakai - NOAA Affiliate, Aakash Gupta, Sara Beery, Mitch Fennell, Shir Bar, Gaspard Dussert, Felipe Parodi, Mohamed Belja, Malte Pedersen, Carly Batist, Giacomo May, Matthias Zuerl, Aran Dasan, Elizabeth Campolongo, Jason Holmberg (Wild Me), Ștefan Istrate, Mike Trizna, Timm Haucke, Joseph Dimos, Edward Bayes, Catarina Silva, Talia Speaker, Rebecca Wilks, Tiziana Gelmi Candusso, Olivier Dietrich, Lauren Harrell
🎄 Aakash Gupta, Jason Holmberg (Wild Me), Sam Lapp
🙌:skin_tone_5: Prabath Gunawardane
😎 Jon Van Oast, Elizabeth Campolongo, Jason Holmberg (Wild Me), Barry Brook, Joseph Dimos
👍 Thor Veen, Piotr Tynecki, Vincent Christlein, Ed Miller, Joseph Dimos, mimi
💯 Gaspard Dussert, Valentin Gabeff, Chris Yeh, Tiziana Gelmi Candusso
👍:skin_tone_3: Pen-Yuan Hsing
👍:skin_tone_2: Cara Appel
🙌:skin_tone_3: Alan Stenhouse
❤️ Alan Stenhouse
🦏 Ian Ingram
Aakash Gupta (aakash@thinkevolveconsulting.com)
2023-12-22 20:30:46

*Thread Reply:* Hi Dan, you could also probably add the EU Rewilding project. Model weights are uploaded on HF:

• HF Spaces https://huggingface.co/spaces/skylord/European-ReWilding-Demo-Yolov8

• Model weights https://huggingface.co/skylord/ReWilding-Europe-Yolov8

Dan Morris (agentmorris@gmail.com)
2023-12-22 20:51:41

*Thread Reply:* Added, thanks!

I tested it on an image with deer from my backyard, it did the right thing.

❤️ Jon Van Oast
🦌 Mitch Fennell, Shir Bar, Aakash Gupta
🎄 Aakash Gupta
Piotr Tynecki (piotr@tynecki.pl)
2024-01-08 02:06:40

*Thread Reply:* @Dan Morris feel free to update the list of the models for mammals with the paper:

Recognizing European mammals and birds in camera trap images using convolutional neural networks

Two models based on EfficientNetV2 and ConvNextBase + test set are here: https://data.uni-marburg.de/handle/dataumr/246 https://github.com/umr-ds/Marburg-Camera-Traps

Paper: https://inf-cv.uni-jena.de/wordpress/wp-content/uploads/2023/09/Talk-8-Daniel-Schneider.pdf

Dan Morris (agentmorris@gmail.com)
2024-01-08 16:30:45

*Thread Reply:* @Piotr Tynecki Done, thanks! Sample code looks very straightforward. Also added your dataset to this list.

Piotr Tynecki (piotr@tynecki.pl)
2024-01-09 03:38:48

*Thread Reply:* (it’s not my paper/model/code but thanks!)

Sam Lapp (sam.lapp@pitt.edu)
2024-01-12 18:41:51

*Thread Reply:* On the bioacoustics side of model zoos/parks/wildernesses, we’ve been developing the “Bioacoustics Model Zoo” here. It’s admittedly OpenSoundscape-focused, but it does include APIs for generating predictions or embeddings with Perch and BirdNET. In addition to adding more models to the zoo, the section of the README “Other automated detection tools for bioacoustics” could be expanded to fit the spirit of “model wilderness” - I’d be happy to work or collaborate on that effort (or port it to another location) if @Dan Morris or others think such a list is needed on the acoustics side of things

🙌 Suzanne Stathatos, Matt Weldy, Carly Batist, Dan Morris, Shir Bar, Mitch Fennell, Enis Berk Çoban, Sara Beery, Andrew Schulz, Thijs van der Plas, Anton Alvarez, Ben Williams, Lauren Harrell
😎 Jon Van Oast, Maddie Cusimano, Tiziana Gelmi Candusso
👀 Alba Márquez-Rodríguez, Meredith Palmer
Dan Morris (agentmorris@gmail.com)
2024-01-12 20:51:30

*Thread Reply:* @Sam Lapp Yes, I really like your model zoo! You built not just the easy version, but the proper model zoo that I claimed earlier in this thread is very hard. 🙂 Coincidentally, I played with your model zoo for the first time last week and made my first PR to your repo. Worlds collide!

👍 Sam Lapp
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2023-12-29 16:27:42

Hi everyone! Continuing our trend from last year, this time also we have created a 2023 Year in Review Playlist of our group's work (Flight Robotics and Perception Group at Uni Stuttgart). A large part of it is likely of interest for this slack workspace. Enjoy it here : https://www.youtube.com/playlist?list=PLZM-Zi7aahIvGecEUjNafPWhb2pYuwBVO

😎 Timm Haucke, Alen Lin
👍:skin_tone_3: Alan Stenhouse
Alen Lin (alenlin2752@gmail.com)
2023-12-30 06:04:46

Hi everyone, I'm Alen from Taiwan! Although I'm a high school student, I have a deep passion for birds (and any and all things related to them 🦉), really excited to join this slack feed. I'm currently working on a project which utilizes computer vision to track migratory patterns of birds and analyze the impact of climate change on annual migration (alongside some other small ideas!) and am deeply interested in participating in larger research or volunteer programs, particularly over the summer. I would love to pick your collective brains on what's available to a motivated high school student like myself, or if there are any organizations you recommend I reach out to. Thank you in advance, and I look forward to reading about all of the papers, workshops and competitions you've shared in here :dad_parrot:

🎉 Kai Hung, Dan Morris, Takumi Sato, Isabella, Alan Stenhouse
👋 Viktor Domazetoski, Mitchell Rogers, Shir Bar, Sara Beery, Edward Bayes, Talia Speaker, Priscilla Ye
Dan Morris (agentmorris@gmail.com)
2023-12-31 19:05:14

*Thread Reply:* Wow, in high school and already so motivated... when I was in high school, I think my biggest achievement was beating Final Fantasy (the NES version, in case anyone else here is old enough to make that distinction) in one continuous, pizza-fueled session. And here you are training AI models to analyze the impact of climate change on bird migration. Kudos.

And welcome to this community! If you're looking to "meet" (in the 2023 sense of the word) more folks doing this kind of work, in addition to hanging out on this Slack, I recommend hanging out at as many WILDLABS events as you can; many of us are big fans of the Variety Hour series as a way to get a feel for what's going on in this field that's as close to in-person as one could get without teleporting everyone on this Slack into one room:

https://wildlabs.net/article/variety-hour-2023-lineup

❤️ Talia Speaker
Dan Morris (agentmorris@gmail.com)
2023-12-31 13:33:08

A couple of posts ago I shared a link to a list of publicly-available models for camera trap images; I am also trying to put together an analogous list of models for detecting fish in underwater images/video (same criteria... listing downloadable model weights):

https://github.com/agentmorris/agentmorrispublic/blob/main/fish-datasets.md#publicly-available-models-for-fish-detection

Let me know if I'm missing stuff there too?

👍 Justin Kay, Jason Holmberg (Wild Me), Edward Bayes, Thor Veen, Sara Beery, Luke Sheneman, Catarina Silva, Levi Cai
🙌 Anna Willoughby, Jason Holmberg (Wild Me), Shir Bar, Sara Beery, Carly Batist, Kalindi Fonda
🎉 Jon Van Oast, Sara Beery, Tiziana Gelmi Candusso
🐟 Malte Pedersen, Aran Dasan, Shir Bar, Sara Beery, Meredith Palmer
👍:skin_tone_3: Alan Stenhouse
Edward Bayes (bayesbayes@gmail.com)
2024-01-05 11:14:41

*Thread Reply:* Love this recent work compiling these lists, Dan, super useful! Do you know of any similar lists for birds (from terrestrial, not aerial, images), insects, plants, or reptiles/amphibians? Or know of any evals of how MD performs for these (I imagine it’s only really performant on birds given the training dataset)?

If these don’t exist, I'm doing some research in this area and happy to compile a first draft to add to the ‘wilderness’.

Dan Morris (agentmorris@gmail.com)
2024-01-05 14:17:42

*Thread Reply:* Sorry, I don't know of similar lists for other terrestrial modalities/taxa.

I don't know of formal evaluations of MD on birds and reptiles, though subjectively I would say that for birds, MD works almost as well as it does for mammals, but not quite, and for reptiles and amphibians, it can be anywhere from "excellent" to "a total catastrophe".

❤️ Edward Bayes
Edward Bayes (bayesbayes@gmail.com)
2024-01-05 15:05:00

*Thread Reply:* "anywhere from "excellent" to "a total catastrophe"." - love it! 😂

Edward Bayes (bayesbayes@gmail.com)
2024-01-05 15:05:06

*Thread Reply:* And thanks so much! This is really useful

Sam Lapp (sam.lapp@pitt.edu)
2024-01-02 08:25:30

Job opportunity: KAUAI AVIAN RESEARCH/MANAGEMENT COORDINATOR. Deadline to apply: extended to Jan 5 (soon!). Reposting from an email from project coordinator Cali Crampton (I’ll post the description in a reply thread because it’s long)

❤️ Sara Beery, Suzanne Stathatos
Sam Lapp (sam.lapp@pitt.edu)
2024-01-02 08:25:50

*Thread Reply:* KAUAI AVIAN RESEARCH/MANAGEMENT COORDINATOR – ID# 223806. CLOSING DATE: January 5, 2024. INQUIRIES: Lisa Crampton (Kauai). Regular, Full-Time, RCUH Non-Civil Service position with Pacific Cooperative Studies Unit (PCSU), Kauai Forest Bird Recovery Project (KFBRP) located in Hanapepe, Kauai. Continuation of employment is dependent upon program/operational needs, satisfactory work performance, availability of funds, and compliance with applicable Federal/State laws.

MONTHLY SALARY: $5,140/Mon.

DUTIES: Leads planning, organization, and implementation of research, monitoring, and management projects on Kauai for the benefit of native forest birds, with particular focus on threatened and endangered forest bird species. Performs and coordinates all project activities including research, conservation management, field work and logistics, budget tracking, regulatory compliance. Liaises with partners to facilitate research activities, ex situ conservation, control of introduced species and/or habitat restoration in study areas. Recruits and provides oversight, instruction, and guidance for a team of up to ten (10) members (including staff, interns, and volunteers). Produces annual work plans and reports for submission. Ensures that proper environmental compliance documentation is prepared for all projects and that all permits and regulatory approvals are obtained. Coordinates and performs analysis of field data with guidance from Program Manager. With Program Manager, develops proposal budgets and writes grant proposals. Carries out routine awareness-raising events to ensure ongoing project funding. Initiates and conducts public outreach efforts, working with media, community organizations, civic leaders, and individuals through an effective program using personal contact, media briefings, brochures, press releases, presentations, and public service announcements. May travel to other islands for site visits and fieldwork. Drives to project activities and field work locations.

PRIMARY QUALIFICATIONS:

EDUCATION: Master’s Degree from an accredited college or university in Biological Sciences, Natural Resource Management, Biological Conservation, or related field. (Bachelor’s Degree from an accredited four (4) year college or university and at least two (2) years of independent research in conservation of endangered avian species may substitute for a Master’s Degree.)

EXPERIENCE: Three to five (3-5) years of experience conducting biological research, ornithology, invasive species management, or conservation management. Experience includes one to three (1-3) years of experience mist-netting, banding and taking blood samples from passerines; conducting surveys of plants and animals; and locating and monitoring bird nests. Experience includes one to two (1-2) years of supervisory or team oversight/leading experience.

KNOWLEDGE: Detailed knowledge of the principles and techniques of conservation management, remote field operations, and avian species biology. Working knowledge of natural history relevant to native Hawaiian wildlife, or similar environments. Working knowledge of native Hawaiian ecosystems, native Hawaiian plants and wildlife, or similar environments. Detailed knowledge of techniques used to inventory and monitor wildlife, and other natural resource assets, including experimental design and data analysis. Working knowledge of rules and regulations relating to field operations, and pertinent laws, regulations, licensing and permitting requirements related to the program. Knowledge of management principles including, but not limited to, supervising/developing employees, EEO, workplace safety, corrective/disciplinary actions, and administration of policies and procedures.

ABILITIES & SKILLS: Strong organizational ability to plan, lead, and execute logistically complex field operations. Strong ability to solve logistical problems and innovate solutions related to biological threats. Ability to lead a field crew and work as a team member for safe and efficient field operations. Ability to maintain a positive professional attitude in support of a productive work environment. Must have excellent communication and program management skills. Ability to perform and coordinate data analysis and present findings and recommendations in written report format. Ability to estimate costs associated with complex projects and to project budget requirements for future program needs. Proficient use of Microsoft Office word processing and spreadsheet programs, databases (e.g. Access), GIS (e.g., AGOL), and statistical software (e.g. R or SAS). Must possess a valid driver’s license (and if use of personal vehicle on the job is required, must also have valid personal driver’s insurance equivalent to Hawai’i’s No-Fault Driver’s Insurance) and maintain throughout the duration of employment.

Post Offer/Employment Conditions: Must possess the American Red Cross Certification in First Aid/CPR. Must be able to complete basic helicopter safety course (A100) and external sling load (A219) course within twelve (12) months from date of hire and become certified as a PCSU Helicopter Manager when training is offered. Must be able to complete chainsaw training within twelve (12) months from date of hire and maintain throughout duration of employment. Must be able to qualify for DOFAW’s Bird Banding Laboratory (BBL) subpermit to band and take blood from threatened and endangered passerines. Must be able to complete the UH Information Security Awareness Training (ISAT) within two (2) weeks from date of hire, and re-certify every twelve (12) months.

PHYSICAL/MEDICAL DEMANDS: Must be able to conduct fieldwork in dense vegetation and remote areas under difficult conditions (e.g., heat, rain, cold temperatures, poor footing). Must be able to hike over difficult terrain and long distances of eight to ten (8-10) miles with a backpack weighing up to forty (40) pounds unassisted.
Must be able to work around and fly in approved helicopters. POLICY/REGULATORY REQUIREMENT As a condition of employment, employee will be subject to all applicable RCUH policies, procedures, and trainings and, as applicable, subject to University of Hawai’i’s and/or business entity’s policies, procedures, and trainings. Violation of RCUH’s, UH’s, or business entity’s policies and/or procedures or applicable State or Federal laws and/or regulations may lead to disciplinary action (including, but not limited to possible termination of employment, personal fines, civil and/or criminal penalties, etc.). SECONDARY QUALIFICATIONS: Management, public relations, and administration skills. Knowledge of funding sources in Hawai’i and nationally. Proven grant writing and fund-raising abilities. Demonstrated ability in publishing peer-reviewed scientific papers. Experience coordinating with land managers in Hawai’i. Familiarity with the Alakai Wilderness Area, Kauai and its native birds. Expertise in database design and management. Experience in reintroduction or translocation, or founding of captive populations of passerine birds. Aviculture skills, including husbandry and fluid administration. Experience monitoring animal movements using radio-tracking. APPLICATION REQUIREMENTS: Please go to www.rcuh.com and click on “Job Postings.” You must submit the following documents online to be considered for the position: 1) Cover Letter, 2) Resume, 3) Supervisory References, 4) Copy of Degree(s)/Transcript(s)/Certificate(s). All online applications must be submitted/received by the closing date (11:59 P.M. Hawai’i Standard Time/RCUH receipt time) as stated on the job posting. If you do not have access to our system and the closing date is imminent, you may send additional documents to rcuh_recruitment@rcuh.com. If you have questions on the application process and/or need assistance, please call . 
Please visit https://www.rcuh.com/document-library/3-000/benefits/rcuh-benefits-at-a-glance/ for more information on RCUH’s Benefits for eligible employees.

RCUH’s mission is to support and enhance research, development and training in Hawai’i, with a focus on the University of Hawai’i.

We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Dr. Lisa “Cali” Crampton Program Manager Kauai Forest Bird Recovery Project PO Box 27 (USPS mail) or 3751 Hanapepe Rd (courier packages) Hanapepe HI 96716

Ghazi Randhawa (muhammadghazirandhawa@gmail.com)
2024-01-02 18:39:43

*Thread Reply:* @Sidhika

Patrick Beukema (patrickb@allenai.org)
2024-01-02 20:15:10

I know that last year was hard on the planet -- but there was also a lot of innovation and progress in environmental AI and I was inspired by the excellent talks at NeurIPS (especially Sara, David, and Zaira's, just to name a few). I wrote a short recap of some key takeaways in AI4Earth throughout the year. This is not exhaustive by any means, and it is certainly opinionated, and of course any feedback is welcome. https://blog.allenai.org/ai-for-earth-2023-in-review-49a27cb731a8

❤️ Suzanne Stathatos, Sara Beery, Ted Schmitt, Shir Bar, Mike Gartner, Konstantin Klemmer, Edward Bayes, Shravan Ambudkar, Omiros Pantazis, Devis Tuia, Robin Zbinden, Chris Llorca, Joseph Dimos, Vinicius Amaral, Alexander Merdian-Tarko, Alan Stenhouse, Chris Lange
💕 Jon Van Oast
Patrick Beukema (patrickb@allenai.org)
2024-01-02 20:17:25

*Thread Reply:* I wrote this somewhat spontaneously and at the last moment, and wanted to get it out there before the new year -- but I think it would have been much better had it been written more collaboratively/jointly, especially to offer more diverse perspectives.

Devis Tuia (devis.tuia@epfl.ch)
2024-01-03 05:27:28

*Thread Reply:* Thanks @Patrick Beukema! It is important to keep an optimistic drive, and your recap is a great read to start the year with a lot of positive energy!

❤️ Patrick Beukema
Ben Weinstein (benweinstein2010@gmail.com)
2024-01-03 13:05:31

I get 1-3 requests for review a week at this point. I'm considering a blanket approach: if the journal doesn't require data availability, then I won't review. I encourage others to join me. I'm trying to decide how stringent and reasonable the requirement should be. I think my minimum is: if within 2 minutes I can find a paper I'd want to read in the latest issue, but the data "is available on request", I won't review. Use that and the guide for authors as an indicator. As a survey, in my inbox I have requests from Ecological Indicators, Biogeosciences, and International Journal of Applied Earth Observation and Geoinformation. Ecological Indicators has the typical weak data availability statement (https://www.elsevier.com/researcher/author/tools-and-resources/research-data/data-statement), and I found an interesting paper within 15 seconds (https://www.sciencedirect.com/science/article/pii/S1569843223004417). The other journals appear equally vague. Are there other criteria I should use? Is this fair? Does it bias towards more expensive journals, and does it have any other unintended consequences? Of course there are legal and logistical reasons not to share data, but those are the vast minority of cases. A reasonable explanation in the data availability section is usually sufficient in those cases. FYI, credit to Methods in Ecology and Evolution, which has probably the most comprehensive statement (https://besjournals.onlinelibrary.wiley.com/hub/editorial-policies): "Data are important products of the scientific enterprise, and they should be preserved and usable for decades in the future. The British Ecological Society thus requires, as a condition for publication, that all data supporting the results in papers published in its journals are archived in an appropriate public archive offering open access and guaranteed preservation. For theoretical papers the underlying model code must be archived."

🎉 Jon Van Oast, Ariel Chamberlain, Sara Beery, David Russell, Jason Holmberg (Wild Me)
👏 Leonardo Viotti, Enis Berk Çoban, Dan Morris, Rowan Converse, Jason Holmberg (Wild Me), Martin Marzidovsek
Caleb Robinson (calebrob6@gmail.com)
2024-01-03 16:05:21

*Thread Reply:* I think this is reasonable -- (lack of) dataset availability came up several times for me while reviewing for NeurIPS Datasets and Benchmarks and AAAI AI for Social Impact tracks. It is impossible to properly review some papers without data availability.

Dan Morris (agentmorris@gmail.com)
2024-01-03 21:47:08

*Thread Reply:* I generally agree with what Ben says (everyone is almost certainly tired of hearing me shout about public data on this Slack), but I have a slightly softer approach as a reviewer...

  1. If there's a good reason that data can't be released, and it's either mentioned specifically or it's self-evident, I don't totally ignore the issue as a reviewer, but I look at it as slightly narrowing the audience that this paper might impact and slightly raising the bar for reproducibility. There are a zillion other things that make an audience narrow or broad, or make it more/less difficult to reproduce a result, and this goes in the pile somewhere, not even really as a strike against the paper, more as a bonus point they didn't get from me. The most common reason I see is licensing of commercial satellite imagery... but that's also an example of where the models/methods are still useful to other people, because if a paper says "we used WorldView-3 imagery but we can't release it", readers can still purchase WV-3 imagery and make use of the methods/models from the paper. I.e., this is really different than saying "we built a totally custom hyperspectral drone you'll never see and here are our results that you can never verify". I think we don't want to create a culture where no one can ever write papers about AI methods for commercial satellite imagery. Ditto for papers that use data that came from sources that don't allow public release; in the camera trap space, for example, requiring data release would in practice bias significantly against writing papers about data from indigenous land, or from whole countries (e.g. India) where data release is hard or impossible, or at least much harder than it is in the US.
  2. If a paper doesn't release the data and doesn't really say anything about why, that's a more significant strike from me, because I agree with your inclination toward encouraging data release, but it's still not a dealbreaker, just an imperfection. If a paper is otherwise awesome, with awesome open-source code and/or public models and/or methods that are obviously a good idea, I might still give an above-the-rejection-line rating.
  3. If the paper says that part of their contribution is that the data is publicly available, and I click the link and it's obviously not really available, now I get really cranky as a reviewer. As often as not it's just carelessness, and maybe they intend to really release the data, but IMO you can't have it both ways. Same for the case where data is "publicly available" but I have to log in to access it during the anonymous review process. Grrrr.
Ben Weinstein (benweinstein2010@gmail.com)
2024-01-03 23:30:57

*Thread Reply:* I totally agree with these guidelines once you've accepted the review. I'm just talking about trying to choose among the many review requests we are all getting. These are all good points. Usually you just get an abstract, so you can't even tell any of these things yet.

Drea Burbank (drea@savimbo.com)
2024-01-06 15:08:50

Hey guys, really bad at checking this channel but FYI our Indigenous-led biodiversity credit methodology is in open review now and will be the first certified biodiversity credit in the world ~ Feb. If you’re a biodiversity nerd like me please consider giving us a .

isbm.savimbo.com
😍 Eric Greenlee, Ted Schmitt, Tiziana Gelmi Candusso
Aran Dasan (aran@sntech.co.uk)
2024-01-07 13:10:10

Sadly we’re not eligible for this one (marine monitoring is out of scope), but here’s a new funding call from InnovateUK:

https://apply-for-innovation-funding.service.gov.uk/competition/1838/overview/8e7ae74c-af9d-4c8b-8bee-084865a57276#summary

> Defra and Innovate UK will invest up to £5 million in collaborative innovation projects that develop environmental monitoring solutions.
> Projects must develop new or repurpose existing sensor systems and capabilities, for example observation systems, sensor or sampler carrying platforms or modelling systems, and focus on one or more of the following challenge areas:
> - biodiversity and natural capital
> - soil health (including measuring soil carbon)
> - water quality
> - greenhouse gas (GHG) and ammonia emissions from Defra sectors

👀 Dan Watson
❤️ Dan Watson, Pen-Yuan Hsing
Dan Morris (agentmorris@gmail.com)
2024-01-07 20:17:12

New dataset on LILA, courtesy of New Zealand DOC:

https://lila.science/datasets/nz-trailcams

"This data set contains approximately 2.5 million camera trap images from various projects across New Zealand. These projects were run by various organizations and took place in a diverse range of habitats using a variety of trail camera brands/models. Most images have been labeled by project staff and then verified by volunteers. Labels are provided for 97 categories, primarily at the species level. For example, the most common labels are mouse (49% of images), possum (6.7%), and rat (5.5%). No empty images are provided, but some can be made available upon request. "

But also I'm going to remind everyone in a reply to this post why all the camera trap datasets on LILA are just a figment of your imagination.

:flag_nz: Mitchell Rogers, Alan Stenhouse, Isabella
🐧 Edward Bayes, Carly Batist, Chris Yeh
👍 Piotr Tynecki, Meredith Palmer, Joris Tinnemans, David Will, Luke Sheneman, Sam Lapp
😎 Jon Van Oast
Dan Morris (agentmorris@gmail.com)
2024-01-07 20:21:53

*Thread Reply:* As promised, let me convince you that camera trap datasets don't exist on LILA.

That is, camera trap "datasets" still exist on LILA, but I encourage folks to think of them as arbitrary divisions within the one big dataset that is "LILA camera trap stuff". E.g., if you want images of coyotes, you probably don't care whether they came from the "Idaho Camera Traps" dataset or the "Caltech Camera Traps" dataset.

So for most ML training applications, rather than dealing with individual data sets, consider working with the Really Big CSV File that contains all the camera trap images on LILA, mapped into a common taxonomy:

https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/

The Really Big CSV File has also been imported as a Hugging Face dataset:

https://huggingface.co/datasets/society-ethics/lila_camera_traps

🔥 Elizabeth Campolongo, Carly Batist, Meredith Palmer, Dan Stowell
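
A minimal sketch of the workflow Dan describes: treat LILA's camera trap collections as one pool and filter the combined, taxonomy-mapped metadata for a species of interest. The column names (`dataset_name`, `url`, `common_name`) and the inline mini-sample are stand-ins for illustration only; check the header of the real CSV from lila.science before relying on them.

```python
import csv
import io

# Tiny stand-in for the "Really Big CSV File" of all LILA camera trap images.
sample_csv = """dataset_name,url,common_name
Idaho Camera Traps,https://example.org/idaho/0001.jpg,coyote
Caltech Camera Traps,https://example.org/caltech/0042.jpg,coyote
Caltech Camera Traps,https://example.org/caltech/0099.jpg,bobcat
"""

def image_urls_for_species(csv_file, species):
    """Return image URLs for one common name, ignoring which sub-dataset they came from."""
    reader = csv.DictReader(csv_file)
    return [row["url"] for row in reader if row["common_name"] == species]

coyote_urls = image_urls_for_species(io.StringIO(sample_csv), "coyote")
print(len(coyote_urls))  # 2 in this toy sample
```

With the real file you would stream rows rather than load everything at once (it covers millions of images), or load the Hugging Face mirror via `datasets.load_dataset`.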
Dan Morris (agentmorris@gmail.com)
2024-01-07 20:23:08

*Thread Reply:* Also, if you're wondering "how many images did Dan look at to find the penguin doing the penguin-i-est thing a penguin could be doing for that thumbnail?", the answer is... maybe 5,000? Worth it.

😂 Mitchell Rogers, Jason Holmberg (Wild Me), Shir Bar, Alan Stenhouse, Yseult Hb, Alan Papalia, Elizabeth Campolongo, Carly Batist, Valentin Gabeff, Neha Hulkund
😆 Aakash Gupta, Rachael Laidlaw
🌊 Kalindi Fonda
Dan Morris (agentmorris@gmail.com)
2024-01-10 13:20:10

*Thread Reply:* I also see that @Joris Tinnemans, who did all the hard work to collect and organize this dataset, has joined this Slack... welcome Joris, and thank you for your contribution to LILA!

Emily Lines (erl27@cam.ac.uk)
2024-01-08 10:16:15

A call to academic researchers: some of my colleagues in Computer Science, University of Cambridge have developed a Declaration on Academic Response to the Planetary Crisis: Declaration Text

If you would like to sign it, please add your name here: https://docs.google.com/document/d/1TCrQIIDsJAI6JvOuDOvlzldkrEwhmtCoHOxcy1HlGsc/edit

charlotte (deshchang@gmail.com)
2024-01-08 10:26:49

Hi folks! If you’re using any form of natural language processing to analyze problems related to conservation, I wanted to share an opportunity to present your work at the North American Congress for Conservation Biology (NACCB, Vancouver, June 23-28 2024). @Amrita Gupta and I are co-organizing a symposium entitled “Analyzing text data for conservation problems and action”.
• We have 1-2 more 15-minute presentation slots available.
• If you’re interested and would like to learn more, please get in touch with me ASAP via DM (deadline: January 20).
• I’d also be happy to share more information with you if you know folks that I should contact.
• We’re especially interested in research on the following topics:
  ◦ 1) the barriers to using technology for conservationists, and
  ◦ 2) additional real-world case studies using NLP, such as tracking discourse or attitudes toward species or other environmental entities, analyzing reports of wildlife harvesting, or corporate disclosures of environmental impact (though this latter list is certainly non-exhaustive).

👍 Carly Batist, Joseph Dimos, Suzanne Stathatos, Meredith Palmer, Jon Van Oast, Jason Holmberg (Wild Me), Dan Morris
👍:skin_tone_3: Alan Stenhouse
Abhay (abhaykash12@gmail.com)
2024-01-08 14:44:14

*Thread Reply:* cc: @Peter Bull (spoke to him recently about this space and he wanted to learn more given his recent relevant work)

💯 charlotte
❤️ charlotte
Dan Morris (agentmorris@gmail.com)
2024-01-16 19:23:10

*Thread Reply:* Not relevant to your question, but I didn't realize until you mentioned it on this thread that NACCB is in Vancouver this year. I never go anywhere, and I consider Vancouver not going anywhere (from Seattle), so I'm in! Excited to meet folks at NACCB. Thanks @charlotte for pointing that out!

💯 charlotte
👋 Amrita Gupta
charlotte (deshchang@gmail.com)
2024-01-17 14:02:03

*Thread Reply:* Thanks @Dan Morris and glad to hear! NACCB is a great venue for meeting folks in the policy space as well — the previous NACCB meeting featured a large “America the Beautiful” (aka 30x30) US eNGO meeting where scientists and policy leaders in these orgs were triangulating their strategies for landscape prioritization, communicating to policymakers, etc.

:flag_ca: Dan Morris
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2024-01-20 05:29:19

*Thread Reply:* I'll be there as a speaker in another symposium, looking forward to seeing your symposium!!

🙏 charlotte
💯 charlotte
👋 Amrita Gupta
Ronny Hänsch (rww.haensch@gmail.com)
2024-01-08 17:00:46

Happy New Year everybody! Let's start with some good news: EarthVision will be again at CVPR this year!! Get your papers ready and see you in Seattle. https://www.grss-ieee.org/events/earthvision-2024/ (attached a happy memory from last year when @Sara Beery gave EarthVision a very nice shout-out at the last CVPR panel discussion after giving an awesome keynote at the workshop)

❤️ Caleb Robinson, Shir Bar, Devis Tuia, Nico Lang, Chris Lange, Robin Zbinden, Thor Veen, Akash Nagaraj, Alan Stenhouse, Andy Viet Huynh, Luke Sheneman
🎉 Jon Van Oast, Dan Morris, charlotte, David Russell, Oisin Mac Aodha, Georgia Atkinson, Mitchell Rogers, Valentin Gabeff, Yonghao Xu, Andy Viet Huynh
John Payne (drjohnpayne@gmail.com)
2024-01-14 01:28:17

I would be grateful for papers or advice from anyone who has experience in identifying tree species from aerial photography (airplane or drone), using ML. The goal is to find and identify extra-large trees in a closed-canopy tropical forest, for an ecology/conservation research project. The sample size of identified trees to use for training is limited and there is a very long distributional tail of rare species, so our expectation is that only a few common species will be identifiable to the species level.

Ben Weinstein (benweinstein2010@gmail.com)
2024-01-14 10:09:20

*Thread Reply:* Happy to help. We have been working in this area for a few years. We have an RGB tree detection algorithm (https://deepforest.readthedocs.io/en/latest/) that has been fine-tuned for tropical forests (https://ieeexplore.ieee.org/abstract/document/9387530), but it needs more annotations; 4 or 5 papers on tree species classification in hyperspectral data (https://scholar.google.com/citations?user=7POnELAAAAAJ&hl=en); and we are working on a global tree detection benchmark: https://milliontrees.idtrees.org/. We would love to help and have your data included; every dataset gets the community closer to a usable baseline. RGB-only classification is hard, but I can point to several interesting papers (a couple attached).

🙌:skin_tone_3: Alan Stenhouse
👍 Martin Marzidovsek
John Payne (drjohnpayne@gmail.com)
2024-01-14 14:13:55

*Thread Reply:* Wow, thank you Ben for that rich vein of information. I look forward to reading those references and I’ll respond with questions when I have done so.

Nanticha Ocharoenchai (Lyn) (lynnanticha.o@gmail.com)
2024-02-05 02:39:51

*Thread Reply:* Would suggest reaching out to the Forest Restoration Unit in Chiang Mai, Thailand as they have recently published on this as well http://forru.org

Dan Stowell (dan.stowell@naturalis.nl)
2024-01-15 04:44:43

Hi all - If you were looking for audio datasets of animal sound, where would you look? (I know lots of new datasets recently, but I don't think they're all listed in one place. I'd like to fix that... but there's no point listing them where people don't look...)

🙌 Sara Beery
👀 Ben Williams
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-01-15 09:18:46

*Thread Reply:* https://lila.science/otherdatasets#bioacoustics

👍 Dan Stowell, John Martinsson, Taiki Sakai - NOAA Affiliate, Sam Lapp, Sara Beery
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-01-15 09:19:10

*Thread Reply:* https://bioacousticsdatasets.weebly.com/index.html

👍 Dan Stowell, John Martinsson, Taiki Sakai - NOAA Affiliate, Sara Beery
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-01-15 09:21:50

*Thread Reply:* do you mean like labeled datasets? raw soundscape recordings? completely open-source you-can-download-all-files? all of the above?

Dan Stowell (dan.stowell@naturalis.nl)
2024-01-15 09:24:20

*Thread Reply:* Good questions! I guess I mean anything that has already been somewhat curated/selected, and open data. My secret motivation is... I have new PhD students starting this year, and I'd like it to be easy for people like them to know where to find "all the good new bioacoustics datasets we could use in our ML"

John Martinsson (john.martinsson@ri.se)
2024-01-15 10:33:22

*Thread Reply:* I'd be very interested in a curated list of audio datasets of animal sounds as well!

Where would I look? I would personally do a google site:github.com search and look for a GitHub repo which maintains a curated list of animal sounds (did not find any). However, there seems to be a curated list of audio technologies (https://github.com/DolbyIO/awesome-audio, including some audio datasets at the end). When this fails me I would probably just do a google search and see what I find. I may also look at: https://dcase-repo.github.io/dcase_datalist/ which would lead me to the list by Justin that Carly linked (https://bioacousticsdatasets.weebly.com/). I'd completely miss the lila.science list with this approach. Thank you Carly for that link!

Personal thought: There are a lot of lists on GitHub with the "awesome-" prefix, where the purpose is to curate a list for a certain topic. Not sure how popular these lists are, but I like the community aspect. GitHub makes it easy for others to contribute with a pull request, and then the community can review and accept the addition and keep the list alive. There seems to be one for bioacoustics by Yann Bayle (https://github.com/ybayle/awesome-bioacoustic), but it mainly focuses on references.

Maybe this is an opportunity to create an "awesome-animal-sound-datasets" repository?

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-01-15 11:16:11

*Thread Reply:* There is also another list that would be of interest, which is that of open-source models. For example, the Kitzes lab (@Justin Kitzes @Tessa Rhinehart @Sam Lapp) is working on a bioacoustics model zoo - https://github.com/kitzeslab/bioacoustics-model-zoo/tree/main

Elly Knight (ecknight@ualberta.ca)
2024-01-15 11:32:27

*Thread Reply:* https://wildtrax.ca/ has ~750K acoustic recordings where the processed data (the first detection of each species, identified by expert human listeners) is public. You can download FLAC clips of those detections for model training

Benjamin Hoffman (benjaminsshoffman@gmail.com)
2024-01-15 14:59:46

*Thread Reply:* there are 10 collected here: https://github.com/earthspecies/beans

Lauren Harrell (laurenaharrell@gmail.com)
2024-01-28 21:44:31

*Thread Reply:* Since it hasn’t been mentioned: https://xeno-canto.org is an awesome call repository for birds, and has some insects and bats as well

👆 Carly Batist
Sam Lapp (sam.lapp@pitt.edu)
2024-01-29 10:29:10

*Thread Reply:* here’s one more list, by Tessa, with contents from Justin’s and Dan’s: https://docs.google.com/spreadsheets/d/1KrmCB0vvSK7V3znJfycO-eOMZJKP2F-Ih6neRYPz1Xc/edit#gid=0

Jan Huus (jhuus1@gmail.com)
2024-02-08 13:32:37

*Thread Reply:* Another good one is https://www.inaturalist.org/. I used it to get recordings of squirrels and amphibians for my bird sound recognizer: https://github.com/jhuus/HawkEars. It has a nice API too: https://github.com/pyinat/pyinaturalist

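
Along the lines of Jan's pyinaturalist suggestion, here is a stdlib-only sketch of querying the public iNaturalist v1 API for observations that have audio attached. The endpoint and the `sounds`, `taxon_name`, and `quality_grade` parameters come from the public API; treat the exact parameter set as an assumption to verify against the API docs. The network call is kept in its own function so nothing runs on import.

```python
import json
import urllib.parse
import urllib.request

API = "https://api.inaturalist.org/v1/observations"

def build_sound_query(taxon_name, per_page=30):
    """Build an iNaturalist API URL for observations that include sound media."""
    params = {
        "taxon_name": taxon_name,
        "sounds": "true",            # restrict to observations with audio
        "quality_grade": "research", # community-verified IDs only
        "per_page": str(per_page),
    }
    return API + "?" + urllib.parse.urlencode(params)

def fetch_sound_observations(taxon_name):
    """Perform the request (network call; run this part when online)."""
    with urllib.request.urlopen(build_sound_query(taxon_name)) as resp:
        return json.load(resp)["results"]
```

Each result's `sounds` field then carries the media URLs to download for training clips (pyinaturalist wraps this same endpoint with paging and caching).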
Ben Koger (benkoger@gmail.com)
2024-01-15 15:36:02

In case you don't follow #jobs but want to: 2-year postdoc with me (kogerlab.com) combining salmon, bears, drones, computer vision, and applied and fundamental ecological research. Field work in Alaska but based in Wyoming. Apply ideally by Feb. 10: https://eeik.fa.us2.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/234133/?utm_medium=jobshare

👀 Meredith Palmer, Sara Beery
Devis Tuia (devis.tuia@epfl.ch)
2024-01-16 04:09:21

*Thread Reply:* @Ben Koger is great! apply with him!

❤️ Ben Koger, Sara Beery
George Darrah (george.darrah@systemiq.earth)
2024-01-16 06:02:49

Hi everyone - our friends at Ground Effect are hiring a Natural Capital Portfolio and Investment Director, please do shoot me a DM if interested/you know anyone who'd be up for this! I think it's probably the most exciting nature investing remit out there... across companies, academia and NGOs. https://www.groundeffect.io/

Burooj Ghani (buroojghani@gmail.com)
2024-01-16 08:51:03

Can you design a system that learns from five example vocalizations in a lengthy animal sound recording to identify similar sound events throughout the audio?

We will continue to host the Few-shot Bioacoustic Event Detection challenge at DCASE 2024. Stay tuned and spread the word!

DCASE challenge 2024 short task descriptions are out and the challenge will be live on April 1. https://dcase.community/challenge2024/

👍 Georgia Atkinson, Edward Bayes, Sara Beery, Carly Batist, Douglas Mbura, Benjamin Hoffman, Enis Berk Çoban, Shubhr singh, Maddie Cusimano
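
The few-shot task above can be illustrated in its simplest form: average the (e.g. five) labeled exemplars' feature vectors into one prototype, then flag every analysis window in the recording whose cosine similarity to the prototype passes a threshold. This is a toy, pure-Python sketch with hand-made feature vectors; real DCASE entries use learned embeddings of spectrogram windows, and the threshold would be tuned on validation data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def detect_events(exemplar_feats, window_feats, threshold=0.9):
    """Flag indices of windows that resemble the averaged exemplar prototype."""
    dim = len(exemplar_feats[0])
    prototype = [sum(f[i] for f in exemplar_feats) / len(exemplar_feats) for i in range(dim)]
    return [i for i, w in enumerate(window_feats) if cosine(prototype, w) >= threshold]

# Toy run: windows 1 and 3 resemble the exemplar vocalizations, 0 and 2 do not.
exemplars = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3], [1.1, 0.0, 0.25]]
windows = [[0.0, 1.0, 0.0], [1.0, 0.05, 0.25], [0.1, 0.9, 0.1], [0.95, 0.0, 0.3]]
print(detect_events(exemplars, windows))  # [1, 3]
```

The averaging-into-a-prototype step is the "learning from five examples" part; everything else is ordinary nearest-neighbor matching over time windows.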
Ciera Martinez (ccmartinez@berkeley.edu)
2024-01-17 13:08:25

📣 Come work with me! We have a number of Postdoc positions available and 1-2 of them will focus on leveraging AI for biodiversity monitoring.

DSE, along with an opportunity from our partners in the James M. and Cathleen D. Stone Center for Large Landscape Conservation, is accepting applications from recent PhDs in environmental science and/or data science fields interested in providing domain-specific research that informs the development of data-enabled solutions to our most pressing environmental challenges.

Competitive pay in a unique academic environment that values open science, inclusiveness, and impact-driven research. Please help send to your networks and reach out to me or @Carl Boettiger if you have any questions.

Learn more here: https://dse.berkeley.edu/postdocs

💚 Carl Boettiger, Sara Beery, Meredith Palmer
Patrick Beukema (patrickb@allenai.org)
2024-01-17 16:48:17

Question for the group, especially anyone who might know folks at ESA. We are working on a new dataset that depends on ESA WorldCover — WorldCover did not include Antarctica, but I came across this press release from Copernicus about a cloud-free mosaic (3000 Sentinel-2 images) covering Antarctica: https://sentinels.copernicus.eu/web/sentinel/-/copernicus-sentinel-2-pieces-together-mosaic-of-antarctica That page says “the data are free and open to all scientists and researchers” but I can’t find any link or any other reference to that dataset. Does anyone know about that data, or anyone at ESA who might be able to help?

Patrick Beukema (patrickb@allenai.org)
2024-01-17 17:07:12

*Thread Reply:* It might be that one is simply supposed to use the Copernicus browser — but the press release made it sound like there would be some specific data product: https://dataspace.copernicus.eu/browser/?zoom=7&lat=-77.75217&lng=-175.61199&themeId=DEFAULT-THEME&visualizationUrl=https%3A%2F%2Fsh.dataspace.copernicus.eu%2Fogc%2Fwms%2Fa91f72b5-f393-4320-bc0f-990129bd9e63&datasetId=S2L2ACDAS&fromTime=2023-01-01T00%3A00%3A00.000Z&toTime=2023-02-28T23%3A59%3A59.999Z&layerId=1TRUECOLOR&mosaickingOrder=mostRecent&demSource3D=%22MAPZEN%22&cloudCoverage=9&dateMode=TIME%20RANGE

Patrick Beukema (patrickb@allenai.org)
2024-02-05 19:33:16

*Thread Reply:* I know this wasn't wildly popular, but in case anyone else ends up looking for it, support at Copernicus kindly led me to this: https://s2gm.land.copernicus.eu/mosaic-hub

Devis Tuia (devis.tuia@epfl.ch)
2024-01-18 10:27:02

Hello everyone! I wanted to share with everyone the video of our expedition in Djibouti :flag_dj:, where, with a team of data scientists and marine ecologists from Switzerland, Sudan and Djibouti, we went to study the status of coral reefs 🪸 at the entrance of the Red Sea. It was an inspirational experience and we are now analysing the data; more news will follow! For now, enjoy the beauty of Djibouti! PS: if you are on a schedule, we talk about AI starting at minute 6:40 with @Jonathan Sauder 😉

https://vimeo.com/903195229?share=copy

🎉 Ronny Hänsch, Sonny Burniston, Dan Morris, Justin Kay, Robin Zbinden, Elizabeth Campolongo, Shir Bar, Chris Llorca, Oisin Mac Aodha, Ted Schmitt, Steve Haddock, Elie Alhajjar, Sara Beery, Aakash Gupta
😍 Carly Batist, Nicolas Arrieta Larraza, Sonny Burniston, Justin Kay, Robin Zbinden, Shir Bar, Chris Llorca, Leonardo Viotti, Gustavo Perez, Mitchell Rogers, Elie Alhajjar, Eric Colson, Subhransu Maji, Aakash Gupta
🙌 Aran Dasan, Elie Alhajjar
❤️ Patrick Beukema, Elie Alhajjar, Nora Gourmelon, Valentin Gabeff, Hannah Kerner, Rebecca Wilks
💪 Giacomo May
Hugo Magaldi (magaldi.hugo@gmail.com)
2024-01-19 10:22:55

👋 Hello everyone

Thrilled to join the community! I'm a French engineer with a PhD in mathematics and 3 years of industry experience in ML/DL, eager to enter the field of conservation. I'm looking to join a challenging (public or private) project where I can be of use, and ideally get some field exposure.

Happy to talk, don't hesitate to reach out!

👋 Sara Beery, Dan Morris, Jacob Adkins, Nicolas Arrieta Larraza, Jon Van Oast, Robin Zbinden, Andrew Schulz, Shir Bar, Elizabeth Campolongo, Mitchell Rogers, Aakash Gupta, Phuc Le, Jason Holmberg (Wild Me), Catarina Silva, Andy Viet Huynh, John Martinsson, Piotr Tynecki, Anton Alvarez, Kishore Panaganti, Martin Marzidovsek
Urs (urs.waldmann@uni-konstanz.de)
2024-01-23 08:00:21

The 4th CV4Animals workshop will take place at CVPR2024 in Seattle!

https://www.cv4animals.com/

We invite submissions in 2 tracks:

  • short 4-page unpublished work (potential invitation to an IJCV Special Issue)
  • published work

Deadline: March 27, 2024
👍 Oisin Mac Aodha, Piotr Tynecki, takatomi-k, Gaspard Dussert, Lukas Picek, Kakani Katija, Devis Tuia, Andrew Schulz
🌟 Meredith Palmer, Anton Alvarez, Piotr Tynecki, Mitchell Rogers
Ben Williams (ben.williams.20@ucl.ac.uk)
2024-01-23 09:33:46

Hoping to tap into collective knowledge with a quick Q. I have a colleague who wants to count the number of seeds in an image dataset they have. All with a nice white background, but often with seeds overlapping and lots of added artefacts like other plant matter or inverts etc. (see the large image below, and a smaller crop of it). The go-to route I'm aware of would be something along the lines of spending a long time annotating a bunch of images with bounding boxes and training an object detector.

I'm sure there must be a tool out there smart enough to speed up that annotation process for them, and maybe even train a model directly on top of those annotations. If anyone has any tools they recommend, it would be great to hear about them!

🌱 Benjamin Hoffman, Sam Lapp
😎 Jon Van Oast
Lukas Picek (lukaspicek@gmail.com)
2024-01-23 09:48:24

*Thread Reply:* Hi Ben, I would rather use some old-school computer vision methods than train some large deep models. You can look at Template Matching, Thresholding, Moments, and Moment invariants. There are already multiple methods implemented within OpenCV. I hope this helps.
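
For reference, the simplest version of that old-school route (binarize against the white background, then count connected components) fits in a few lines. A sketch using scipy.ndimage rather than OpenCV; the threshold and minimum area are made-up values that would need tuning on the real seed images:

```python
import numpy as np
from scipy import ndimage

def count_blobs(gray, dark_threshold=128, min_area=20):
    """Count dark objects on a light background.

    gray: 2D uint8 array where seeds are darker than the white backdrop.
    min_area filters out small specks (dust, plant fragments).
    """
    mask = gray < dark_threshold            # seeds -> True
    labels, n = ndimage.label(mask)         # 4-connectivity by default
    if n == 0:
        return 0
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area))
```

This naive version undercounts whenever seeds touch, which is exactly the overlapping-seeds problem mentioned above; `cv2.threshold`/`cv2.connectedComponentsWithStats` would be the OpenCV equivalents.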

Ben Weinstein (benweinstein2010@gmail.com)
2024-01-23 09:53:24

*Thread Reply:* Definitely try segment anything https://segment-anything.com/demo

👍 Lukas Picek, Malte Pedersen
➕ Chris Llorca, Cody Kupferschmidt
Lukas Picek (lukaspicek@gmail.com)
2024-01-23 09:56:49

*Thread Reply:* Good idea, @Ben Weinstein. I just tried it, and the objects are apparently too small for SAM. I might have done something wrong, but it did not work as intended.

Ben Weinstein (benweinstein2010@gmail.com)
2024-01-23 09:58:30

*Thread Reply:* Does it work on a zoom crop?

Ben Williams (ben.williams.20@ucl.ac.uk)
2024-01-23 09:58:36

*Thread Reply:* Yes, should've added that I tried SAM too! It does poorly on the full pic, but splitting into smaller pics it does OK. I'm a SAM noob so I need to look into off-the-shelf stuff that can build on this (e.g. allow labelling of segments and training a model on them).

@Kieran McCloskey
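
One generic way to automate that "split into smaller pics" step: tile the image with overlap, run SAM per tile, and shift the masks back into full-image coordinates. The tiling helper below is plain numpy; the SAM part is only sketched in comments, since it assumes the `segment_anything` package and a downloaded checkpoint:

```python
import numpy as np

def tile_image(img, tile=512, overlap=64):
    """Split an HxW(x3) image into overlapping tiles.

    Returns (y, x, tile_array) triples so that per-tile masks can be
    shifted back into full-image coordinates afterwards. Edge tiles may
    be smaller than `tile` pixels on a side.
    """
    step = tile - overlap
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            tiles.append((y, x, img[y:y + tile, x:x + tile]))
    return tiles

# The per-tile SAM call would then look roughly like this (untested sketch;
# assumes the segment-anything package and a downloaded ViT-H checkpoint):
#
#   from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
#   sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
#   generator = SamAutomaticMaskGenerator(sam)
#   for y, x, t in tile_image(full_image):
#       for m in generator.generate(t):
#           m["bbox"][0] += x  # XYWH bbox back to full-image coordinates
#           m["bbox"][1] += y
```

The overlap exists so objects cut by a tile boundary are seen whole in at least one tile; duplicates in the overlap region would still need de-duplicating (e.g. by IoU).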

Ben Weinstein (benweinstein2010@gmail.com)
2024-01-23 09:59:35

*Thread Reply:* https://labelstud.io/integrations/machine-learning/segment-anything-model/

👍 Ben Williams, Martin Marzidovsek
Kari Kuester (kari.kuester@sunbird.tv)
2024-01-24 04:08:08

*Thread Reply:* Maybe something more old-school with erosion+dilation+edge detection+thresholding+contouring could work here? https://www.youtube.com/watch?v=DsePM4F3tEw

https://shrishailsgajbhar.github.io/post/OpenCV-Apple-detection-counting
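
For the overlapping-seeds case specifically, the erosion part of that recipe can be sketched in a few lines with scipy.ndimage (`cv2.erode`/`cv2.connectedComponents` would be the OpenCV equivalents; `erode_iters` is a made-up default that needs tuning to roughly the neck width between touching seeds):

```python
import numpy as np
from scipy import ndimage

def count_touching(mask, erode_iters=3):
    """Separate touching blobs by erosion before counting.

    mask: boolean array, True where seeds are. Erosion shrinks each blob,
    so blobs joined by a thin neck split apart, at the cost of losing
    blobs smaller than the erosion radius.
    """
    eroded = ndimage.binary_erosion(mask, iterations=erode_iters)
    _, n = ndimage.label(eroded)
    return n
```

Distance-transform-plus-watershed is the more robust classical answer when overlaps are large, but plain erosion is often enough for lightly touching round objects.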

Cameron Trotter (cater@bas.ac.uk)
2024-01-24 05:06:27

*Thread Reply:* Hi @Ben Williams, perhaps SegmentEveryGrain could work here? It's an offshoot of SAM that claims to focus on grain-like objects. If I remember right, a colleague of mine tried using it for ice floe segmentation, though I can't remember how well it worked.

❤️ Ben Williams
Ben Williams (ben.williams.20@ucl.ac.uk)
2024-01-24 05:11:07

*Thread Reply:* These are super useful, thank you! SegmentEveryGrain sounds ideal, will pass it all on

👍 Cameron Trotter
Ben Weinstein (benweinstein2010@gmail.com)
2024-01-24 12:23:44

*Thread Reply:* def report back on SegmentEveryGrain, we've been thinking about retraining SAM for tree crowns and I don't know how worthwhile fine-tuning is, versus just using vanilla.

Aakash Gupta (aakash@thinkevolveconsulting.com)
2024-01-30 06:10:40

*Thread Reply:* @Ben Williams I am curious if you were able to count the number of grains. I got a grand total count of 39,690 grains.

The workflow is provided in this github repo: https://github.com/Think-Evolve-Consulting/Counting-grains and a short blog: https://www.thinkevolveconsulting.com/2050-2-counting-grains/

Ben Williams (ben.williams.20@ucl.ac.uk)
2024-02-26 05:38:21

*Thread Reply:* Thanks Aakash, this is awesome! I won't be trying it myself, but my colleague will, so I'll pass it on; this could definitely be useful. Perhaps we're onto a new grain-counting benchmark!

Jonah Fox (jonahfox@gmail.com)
2024-01-26 08:13:18

Hi everyone - I'm part of a startup trying to improve biodiversity with AI. I'm really interested in (but new to) using drones with multispectral sensors combined with AI. Is there anyone here who does this?

Casey Clifton (caseyclifton@proton.me)
2024-01-26 08:14:42

*Thread Reply:* Also new to this but very interested in and trying to get involved! Got any key papers or particular tech you're excited about?

Jonah Fox (jonahfox@gmail.com)
2024-01-26 08:15:37

*Thread Reply:* just reading this meta paper to start with!

Jonah Fox (jonahfox@gmail.com)
2024-01-26 08:15:38

*Thread Reply:* https://www.mdpi.com/2504-446X/3/1/9

Ben Weinstein (benweinstein2010@gmail.com)
2024-01-26 12:17:13

*Thread Reply:* I work a bunch in this area. Happy to connect you with labs. Are you a developer? Do you have a target use case? What kind of organism are you interested in?

✔️ Jon Van Oast
Jonah Fox (jonahfox@gmail.com)
2024-01-26 13:49:20

*Thread Reply:* Fab! Yes, I'm a developer, currently interested in habitat assessment using drones/GIS.

Casey Clifton (caseyclifton@proton.me)
2024-01-29 19:19:55

*Thread Reply:* Hey @Jonah Fox & @Ben Weinstein - I'm also a developer and have recently launched a business to work on AI for biodiversity with a small team of AI devs. Would love to stay in touch & chat about any specific projects/collabs! ATM we're working on wildlife camera trap stuff and GIS for analysing forests to predict carbon abatement.

Aakash Gupta (aakash@thinkevolveconsulting.com)
2024-01-29 22:49:33

*Thread Reply:* I am interested.

Especially if there is a way to identify metallic traps that are placed by poachers in dense forests. Sometimes these are electrified, leading to significant stress to the trapped animals.

David Russell (davidrussell327@gmail.com)
2024-01-30 09:07:41

*Thread Reply:* Hey all, we're just getting up and running but you may be interested in our work at Open Forest Observatory. We're developing easy-to-use workflows for processing drone imagery for forest ecology. My main project is the multiview mapping toolkit which allows you to do deep learning model training and prediction directly on individual drone images instead of stitched orthomosaics.

openforestobservatory.org
👍 Martin Marzidovsek
Jonah Fox (jonahfox@gmail.com)
2024-02-05 05:10:57

*Thread Reply:* @David Russell @Casey Clifton - would love to have a chat some time ! Are you around this week ?

🙌 Jonah Fox
David Russell (davidrussell327@gmail.com)
2024-02-05 09:08:48

*Thread Reply:* @Jonah Fox that would be great. Would Thursday or Friday morning (say 9-12) EST work for you?

Jonah Fox (jonahfox@gmail.com)
2024-02-05 09:13:48

*Thread Reply:* let's go for Thursday at 9 EST if thats ok

✅ David Russell
Casey Clifton (caseyclifton@proton.me)
2024-02-05 19:17:19

*Thread Reply:* I'll have to pass this time but will check out the above links!

Panayiotis Danassis (pdanassis@g.harvard.edu)
2024-01-26 09:54:24

Hello everyone,

We invite you to submit any work related to social impact to the Autonomous Agents for Social Good (AASG) workshop at AAMAS.

DEADLINE: Feb 29, 2024

For more details, please see: https://panosd.eu/aasg2024/

:flag_nz: Mitchell Rogers, Jason Holmberg (Wild Me), Sara Beery, Sepand Dyanatkar
Aakash Gupta (aakash@thinkevolveconsulting.com)
2024-01-29 23:25:01

I just realized that older messages are lost in the channel. Whenever I find something useful, I save it for later, but I think because this channel is on the free version of Slack, the older messages get deleted!

Changed my workflow to copy the links and entire messages to Google Docs!

Piotr Tynecki (piotr@tynecki.pl)
2024-01-30 00:08:44

*Thread Reply:* In 2023 and before, it was possible to get a free Slack workspace with paid-plan features (like unlimited history). There are two options for getting that: Slack for Nonprofits and Slack for Education, but I see they only offer an 85% discount this year.

👍 Aakash Gupta, Jon Van Oast
Elizabeth Campolongo (e.campolongo479@gmail.com)
2024-01-30 09:45:37

*Thread Reply:* Same! Saving for later used to be a work-around.

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-01-30 10:23:05

*Thread Reply:* Until last year, Slack would keep a certain number of messages (megabytes) in the free version and delete the oldest when the space limit was reached; it would not delete pinned or saved messages. They changed that. Everything older than 90 days is now hidden, including saved and pinned messages. It sometimes shows the content blurred out to tease people into buying the paid plan; internally the history is kept, but it's inaccessible unless you pay.

😞 Elizabeth Campolongo
Jon Van Oast (jon@wildme.org)
2024-01-30 10:42:11

*Thread Reply:* i wonder if the 90-day-limit applies to the self-DM? i sometimes put things in there.

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-01-30 10:43:37

*Thread Reply:* self-DMs are apparently not affected when they are text, but the limit does apply to files and attachments - those get a placeholder saying they are hidden because they are older than 90 days

✔️ Jon Van Oast, Elizabeth Campolongo
Sara Beery (sbeery@caltech.edu)
2024-01-30 12:45:08

*Thread Reply:* I've looked into this several times, but don't have any easy/good solutions. If someone has the capacity to come up with a plan for how to switch to something that maintains history, and for how to pay for/maintain funding support for it, I would very much welcome someone leading that charge! Issues I've run into in the past: with large communities like this, the cost is still high even for nonprofit/education plans, and there are some limitations to both that end up being restrictive (i.e. we're not all at one institution).

💕 Jon Van Oast
👆 Steve Haddock
Piotr Tynecki (piotr@tynecki.pl)
2024-01-30 12:48:37

*Thread Reply:* @Sara Beery should we consider reaching out to organizations like WILDLABS for support? They could potentially advocate for the community and help us secure Slack for nonprofits, with us covering the remaining 15% of the monthly budget.

Sara Beery (sbeery@caltech.edu)
2024-01-30 12:51:10

*Thread Reply:* They already have their own internal slack that is using their nonprofit ID, and I think you only get one per nonprofit. We could also start a nonprofit, but that seems complicated as well 🙂

😅 Jon Van Oast
Piotr Tynecki (piotr@tynecki.pl)
2024-01-30 12:51:39

*Thread Reply:* Hmmm got it.

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-01-30 13:17:06

*Thread Reply:* Another option is to go back to non-proprietary chat options (XMPP-based, ...). The budget needed to host a free open-source chat/messaging server, beyond initial setup, should be a fraction of what Slack demands these days, but the learning curve can be higher and people need yet another app, which makes it harder to connect.

❤️ Jon Van Oast
Jon Van Oast (jon@wildme.org)
2024-01-30 14:10:03

*Thread Reply:* i am a fan (and user) of matrix/element - basically use it as a slack/discord substitute for a couple different groups i am part of. but it seems to be a big ask for folks to adopt and learn yet another app. 😕 i wonder if we could find an ai/conservation ngo which does not already use slack (and never wants to!) who might want to sponsor us. 🤔

Chris Yeh (chrisyeh96@gmail.com)
2024-01-30 22:02:31

*Thread Reply:* Discord and MS Teams are free and maintain history indefinitely, but those require shifting platforms

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-01-31 01:59:48

*Thread Reply:* @Chris Yeh these don't solve the problem though - of a single 3rd party provider having control over both server/protocol, client/app and the terms of service. We'd be back to the situation with slack before they changed the rules, hoping they wouldn't change the rules (charge more money/limit features) or like google chat - stopping the service altogether.

Anastasia Pagán (anastasia@wildme.org)
2024-01-31 13:23:23

*Thread Reply:* Let's go back to IRC. Slack, Discord, Teams, etc. are all just modern attempts at reinventing it anyway. 😄

🥲 Jon Van Oast, Eric Price, Aakash Gupta, Kakani Katija
Piotr Tynecki (piotr@tynecki.pl)
2024-01-31 13:24:30

*Thread Reply:* yeah, IRC on freenode 🔥, I spent more than 10 yrs with the black screen

💚 Jon Van Oast
Steve Haddock (haddock@mbari.org)
2024-02-02 12:39:02

*Thread Reply:* Every non-profit / academic Slack I know has been struggling with this. Some are limping along with the truncated history, while others have switched to Discord. Really sad Slack changed their free model, and it is completely unrealistic to get the paid plan for a large but informal academic group like this.

👍 Jon Van Oast
Anastasia Pagán (anastasia@wildme.org)
2024-02-02 12:40:52

*Thread Reply:* Discord has its advantages. Because its primary audience is the gaming community, they'd likely lose a large part of their audience if they made it paid to use.

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-02-02 12:42:31

*Thread Reply:* yeah but the gaming community would let them get away with other things the professional field wouldn't - I could totally see them do stuff like adding advertising revenue based on what topic is being discussed in chat

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-02-02 12:43:15

*Thread Reply:* unless it causes lag in games, that'd be their death sentence

😂 Anastasia Pagán, Jon Van Oast
Shiva Muruganandham (shivamurug@gmail.com)
2024-02-03 23:30:08

*Thread Reply:* I've been part of a few (pseudo-) academic Slacks that shifted away to Discord for this reason - if there's enough buy-in from the community on here, shifting platforms might work for the better.

Nanticha Ocharoenchai (Lyn) (lynnanticha.o@gmail.com)
2024-02-05 02:37:52

Hi everyone ! Just wanted to drop a hi in here: I'm Lyn and I'm an environmental writer from Thailand :)

Would love to collab with anyone who has a story about their conservation tech/AI initiatives, esp in Asia!

👋 Yves Bas, Robin Zbinden, Chris Lange, Carly Batist, Sara Beery, Jon Van Oast, Ronan Wallace, gvanhorn
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-02-05 08:44:59

*Thread Reply:* Hi! We at Rainforest Connection and Arbimon have lots of projects in SE Asia using sound and AI for biodiversity monitoring! Feel free to email me if you’d like to talk more 🙂 carly@rfcx.org

Brandon Hays (brandon.hays@duke.edu)
2024-02-05 20:49:39

*Thread Reply:* Sawadee krap Lyn! I'm doing my PhD work in Thailand on elephant impacts on forests. I'm going to be doing some drone work and camera trapping, hopefully starting this summer. I'd love to connect and collaborate on some outreach material down the road! Shoot me an email at brandon.hays@duke.edu

Nanticha Ocharoenchai (Lyn) (lynnanticha.o@gmail.com)
2024-02-05 21:27:47

*Thread Reply:* Thank you for your replies @Carly Batist and @Brandon Hays! I will drop you an email now. Excited to hear more 🙂

Serge Wich (sergewich@gmail.com)
2024-03-01 15:50:18

*Thread Reply:* Hi, happy to have a chat. I am involved in https://www.conservationai.co.uk/ Best, Serge

Nanticha Ocharoenchai (Lyn) (lynnanticha.o@gmail.com)
2024-02-05 02:38:29

https://hiimlinn.wixsite.com/lynnanticha

🙌 Jonah Fox, Andrew Schulz, charlotte, Michael Bunsen
👍 Holger Klinck, Devis Tuia, Aakash Gupta
Mohit Dubey (mohit.dubey96@gmail.com)
2024-02-05 13:54:28

Does anyone know of a website for metadata of global plant species? Something like https://www.calflora.org/ but worldwide

Brandon Hays (brandon.hays@duke.edu)
2024-02-05 20:31:10

*Thread Reply:* Not sure if this is what you mean by metadata, but the Global Biodiversity Information Facility has both occurrence data and species fact sheets for a lot of species: https://www.gbif.org/
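
For anyone who wants the same data programmatically: GBIF also exposes a public REST API, including a backbone-taxonomy name-matching endpoint. A small sketch (the live request is left commented out so nothing depends on network access; `match_url` is just an illustrative helper):

```python
from urllib.parse import urlencode

GBIF_MATCH = "https://api.gbif.org/v1/species/match"

def match_url(name, kingdom="Plantae"):
    """Build a GBIF backbone-taxonomy match URL for a scientific name."""
    return f"{GBIF_MATCH}?{urlencode({'name': name, 'kingdom': kingdom})}"

# Live usage (requires network; the JSON response includes fields such as
# "usageKey" and "scientificName"):
#
#   import json, urllib.request
#   with urllib.request.urlopen(match_url("Quercus robur")) as r:
#       print(json.load(r)["scientificName"])
```

The `pygbif` package wraps the same endpoints if a higher-level client is preferred.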

Anna Willoughby (arwill19@gmail.com)
2024-02-07 20:22:02

*Thread Reply:* I help maintain a biodiversity dataset repo. You can check it out for plant databases, and feel free to ping me for additions https://earthskysea.org/biodiversity-databases/

Alan Stenhouse (alan.stenhouse@csiro.au)
2024-02-08 00:04:42
Lauren Harrell (laurenaharrell@gmail.com)
2024-02-05 14:49:08

To those of you in Colorado: we're organizing a local meetup of people/organizations involved in conservation technology in the Boulder area on Tuesday, Feb 20, 2-5 PM at the Boulder Library. Details are here and RSVP here. If you can't make it but are interested in future events, please indicate so in the RSVP form.

wildlabs.net
🙌 Talia Speaker, Jason Holmberg (Wild Me)
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-02-05 15:45:39

*Thread Reply:* so jealous!

Lauren Harrell (laurenaharrell@gmail.com)
2024-02-05 15:46:21

*Thread Reply:* You're always welcome to come visit Carly!

💯 Talia Speaker
stefano puliti (stefano.puliti@nibio.no)
2024-02-13 08:59:07

Hi,

does anyone have good tips on image sources for forests beyond the GBIF data (inclusive of iNaturalist)? I am generally interested in the tree component rather than the understory 🙂

thanks in advance!

Ben Weinstein (benweinstein2010@gmail.com)
2024-02-13 17:33:55

*Thread Reply:* Do you mean photos of individual trees from cell phones? Or photos of forests from above? What's the use case? We have collected a lot.

stefano puliti (stefano.puliti@nibio.no)
2024-02-15 06:32:52

*Thread Reply:* I mean terrestrial photos like from cell phones

Autumn Nguyen (ngoc54n@mtholyoke.edu)
2024-02-13 09:26:48

Hi! Does anyone know of any climate or environmental projects that I can contribute to by processing their data and training machine learning models on them? I am going to do a data science and ML project for my course this semester, and though it’d be much easier to work with some popular Kaggle dataset, I want to spend my time working on some real-world ongoing research/development project. I’d appreciate any pointers 🌱!

  • Autumn
Piotr Tynecki (piotr@tynecki.pl)
2024-02-13 09:33:19

*Thread Reply:* Hey @Autumn Nguyen, If you're interested in working with camera trap data (images/videos) for an NGO based in Europe and implementing state-of-the-art AI/CV algorithms to address specific issues, please feel free to reach out to me. I'd be happy to collaborate.

Sara Beery (sbeery@caltech.edu)
2024-02-15 09:00:53

It's that time of year again!! Aka time to start planning weird social events for this community at computer vision conferences 😂

I'm planning to run another computer vision bird walk at CVPR this year, which will be in Seattle June 17-21. Does anyone local who knows lots about birds have (1) suggestions for where we should do a short walk and look at birds, ideally not too hard to get to from the conference center downtown, and/or (2) interest in joining and helping me lead the walk?

As always, everyone will be welcome, even if you aren't attending CVPR 🦆

💜 Arjun Subramonian (they/them), Nino Migineishvili, Anton Alvarez, Justin Kay, Julia Chae, Suzanne Stathatos, Negar Sadrzadeh, Anastasia Pagán, Edward Bayes, Michael Bunsen, Thijs van der Plas, Andrew Schulz, Ben Weinstein, Leah Brickson, Cara Appel, Alan Stenhouse, Mohamed Elhoseiny, Yunji Jung, Izzy Zhu, Tarun
Devis Tuia (devis.tuia@epfl.ch)
2024-02-15 09:02:36

*Thread Reply:* This time I'll join, Sara (assuming that I am traveling), promised!

🙌 Sara Beery
Matt Weldy (matthewjweldy@gmail.com)
2024-02-15 10:00:42

*Thread Reply:* It's always fun to watch the nightly crow migration at UW. Around 10k crows fly in every night to a communal roost. https://environment.uw.edu/news/2021/12/a-story-of-10000-crows-the-nightly-migration-to-uw-bothell-campus/

👀 Sara Beery
Arjun Subramonian (they/them) (arjun.subramonian@gmail.com)
2024-02-15 10:11:53

*Thread Reply:* Fomo fr

Dan Morris (agentmorris@gmail.com)
2024-02-15 11:40:48

*Thread Reply:* I'm also a fan of seeing the crows descend on Bothell, but only during the winter, because I'm old and go to bed early and it turns out that crows respond to sunlight and don't use clocks.

🐦 Alan Stenhouse
Dan Morris (agentmorris@gmail.com)
2024-02-15 11:54:10

*Thread Reply:* If anyone wants to come over to Redmond (~15 minutes without traffic) really early in the morning, you can visit me in my native habitat, which is the Marymoor Dog Park, which doubles as a suburban wildlife sanctuary. I can't guarantee herons, eagles, ospreys, kingfishers, or beavers, but if you come here around sunrise, I can give you as close to a guarantee as you'll get in wildlife viewing. I can 100% guarantee a golden retriever sighting.

😁 Sara Beery, Alan Stenhouse
😄 Matt Weldy
🐕 Anastasia Pagán, Sara Beery, Shir Bar, Mitch Fennell, Tarun
Anastasia Pagán (anastasia@wildme.org)
2024-02-15 17:10:47

*Thread Reply:* Not birds, but since the convention center is a short walk from the waterfront, it might be possible to spot some orcas from downtown: https://www.oregonlive.com/pacific-northwest-news/2023/10/want-to-see-an-orca-from-the-seattle-shoreline-theres-a-group-chat-for-that.html

😮 Michael Bunsen, Tarun
Cara Appel (appelc@oregonstate.edu)
2024-02-18 17:05:40

*Thread Reply:* Union Bay Natural Area by UW campus is excellent and is one of the most heavily birded areas in the region. I'd love to join!

😍 Sara Beery
💯 Jes Lefcourt, Angela Zhu
Caleb Robinson (calebrob6@gmail.com)
2024-04-01 11:05:03

*Thread Reply:* NYT's "Opinion Today" title is "Birding can change your life" -- https://www.nytimes.com/2024/03/30/opinion/birding-spring-merlin-ebird.html. Made me think about this thread :)

😁 Sara Beery
Justin Kay (justinkay92@gmail.com)
2024-02-15 11:15:16

The call for applications for the winter 2025 Computer Vision for Ecology workshop at Caltech is now open! Please help us spread the word. https://cv4ecology.caltech.edu/call_for_applications.html

We invite applications for the third Computer Vision for Ecology (CV4E) workshop, a three-week hands-on intensive course in CV targeted at graduate students, postdocs, early faculty, and junior researchers in Ecology and Conservation. Each student in the workshop will learn to build computer vision models to help answer their ecological research questions. Students are expected to propose a project as part of their application materials, and clearly define (1) the question they hope to answer, (2) the data they plan to use, and (3) the broader impacts of their work if successful. See here and here for examples of past projects.

Please feel free to reach out if you have any questions. Many students from previous years are also very active in the AI for Conservation community and I'm sure would be happy to answer any questions from the student perspective 🙂

Workshop dates: January 6–24, 2025
Workshop location: California Institute of Technology (Caltech), Pasadena, CA, USA
Deadline for applications: March 22, 2024

🤔 Piotr Tynecki
🐋 Shir Bar, Lukas Picek, Holly Houliston, Taiki Sakai - NOAA Affiliate, Timm Haucke, Anton Alvarez, Michael Bunsen, Andrew Schulz, Tarun
🎉 Mark Goldwater, Carly Batist, Dan Morris, Lukas Picek, Gustavo Perez, David Russell, Sara Beery, Bernie Boscoe, Oisin Mac Aodha, Julia Chae, Timm Haucke, Negar Sadrzadeh, Anton Alvarez, Talia Speaker, Shir Bar, Yseult Hb, Andrew Schulz, Priya Donti, Kalindi Fonda, Alan Stenhouse, Rowan Converse, Tiziana Gelmi Candusso
💕 Jon Van Oast, Anton Alvarez, Jonah Fox, Cara Appel
:squirrel: Andrew Schulz
Shir Bar (shirbar@tauex.tau.ac.il)
2024-03-29 02:17:06

*Thread Reply:* Hey all, The deadline for applications to the CV4E 2025 winter workshop has been extended till Monday April 1st! Please consider applying, everyone knows the best thing to do in winter is to sit somewhere sunny and write awesome AI for Conservation code 🙂 See more info in the thread above.

🙌 Carly Batist, Sara Beery, Suzanne Stathatos, Justin Kay, Anton Alvarez, Ishan Nangia
🐋 Taiki Sakai - NOAA Affiliate
😎 Jon Van Oast
🙌:skin_tone_3: Alan Stenhouse
Kalindi Fonda (kalindi.fonda@gmail.com)
2024-02-18 09:11:20

Looking for courses/masters in AI for conservation.

If longer, preferably remote, but open to all suggestions.

I am doing a computer science masters online, remote and part-time (I always like to have some kind of course going to keep learning). I do one course every few months. As I progress, I figure it would probably be more rewarding if the topics and courses were more specific and more relevant to my interests, i.e. in line with this Slack group.

Thank you 💪 🪸 ❤️‍🔥

👍 Aakash Gupta
Aakash Gupta (aakash@thinkevolveconsulting.com)
2024-02-18 11:06:22

Recently released Sora (by OpenAI) could be a game-changer for those long-tailed events. Consider showing the model a single image of a rare species and asking it to model the behavior. The generated data could then be piped into training more models.

👀 Anton Alvarez, mimi, Jonathan Roberts
🤯 Rita Pucci
❤️ Ben Williams
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-02-20 07:25:16

Hi community, I have a rather ill-posed question. Can someone point me to a website/source that has some quantification of how many different animal species can currently be automatically recognized (models are available, free or not) in images (visible, thermal, etc.)? The Wild Me website says 54 species but only 17 platforms. Maybe I missed something there. Is there any other reliable source? Thanks!

Gaspard Dussert (gaspard.dussert@gmail.com)
2024-02-20 07:51:34

*Thread Reply:* Hi ! You mean for the task of Re-ID or species classification ?

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-02-20 07:52:58

*Thread Reply:* species classification

Piotr Tynecki (piotr@tynecki.pl)
2024-02-20 08:32:53

*Thread Reply:* @Aamir Ahmad feel free to take a look at the Publicly-available ML models for camera traps page powered by @Dan Morris.

You could consider trying the DeepFaune or Marburg camera trap models for testing, if their sets of species fit your case.

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-02-20 08:42:01

*Thread Reply:* Thanks!! Actually, I needed this summary info on approximately how many and which species are automatically classifiable, but I will look at those links. Thanks.

Dan Morris (agentmorris@gmail.com)
2024-02-20 10:52:31

*Thread Reply:* I don't recommend quantifying the number of species that can be classified this way; the number of classifiable species is extremely specific to different modalities, different taxa, different ecosystems, etc. And maybe more importantly "can be recognized" squashes subtle questions of accuracy into "can/can't".

That said, I can't help but try to answer the question of what model wins the gold medal for "most species recognized in images", and I'm almost positive it would be the iNaturalist computer vision model.

According to the most recent blog post I can find about their model:

https://www.inaturalist.org/blog/83370-a-new-computer-vision-model-v2-6-including-1-399-new-taxa

...it has ~78k total taxa, and it looks like somewhere on the order of 50% of those are animals. Let's round off to a neat 40k. So, I think the gold medal goes to the iNat model with ~40k animals?

There's an interesting question about how many species "can" be recognized by a multimodal model like Gemini that isn't specific to biodiversity applications. Maybe Gemini has seen more than 40k animal species in training, I don't know. But I think the spirit of the question is "among models whose raison d'etre is recognizing animals...", so, I'm going with iNat.

❤️ Jon Van Oast, Tiziana Gelmi Candusso, Carl Boettiger
👍 Chris Lange
Jon Van Oast (jon@wildme.org)
2024-02-20 11:06:02

*Thread Reply:* fwiw, our production platforms at wild me each cover multiple species. thus, ~17 servers/sites, but (likely more than) 54 species. note, however, that not all of these species can be classified via ML (e.g. our algorithms do not distinguish between several giraffe species). so this number is very likely not a good measure of the species-classification coverage you are looking for.

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-02-20 11:08:16

*Thread Reply:* thanks to both!!

🎉 Jon Van Oast
Alan Stenhouse (alan.stenhouse@csiro.au)
2024-04-30 20:22:06

*Thread Reply:* BioCLIP might be of interest: https://imageomics.github.io/bioclip/

Rowan Converse (rowanconverse@unm.edu)
2024-02-20 17:34:40

Hi all, excited to announce the release of a couple of annotated datasets of UAS imagery of waterfowl from wildlife refuges in New Mexico. Get 'em on LiLA: https://lila.science/datasets/uas-imagery-of-migratory-waterfowl-at-new-mexico-wildlife-refuges/

This is a relatively small set of redundantly annotated images that we used to evaluate how good humans are at identifying birds from aerial imagery, so that we have a better understanding of the inherent level of uncertainty that our deep learning models are learning from. We compared 15 biologists to a bunch of Zooniverse volunteers, and the results are currently in review -- will update here once that article is released.

Planning to update the LiLA page with a lot more annotated drone imagery of waterfowl in the near future-- will announce here also when that is ready.
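Not the paper's actual analysis, but the basic idea behind scoring redundantly annotated images — e.g. simple pairwise percent agreement between annotators — can be sketched with hypothetical labels:

```python
from itertools import combinations

def pairwise_agreement(annotations):
    """annotations: dict annotator -> {image_id: label}.
    Returns mean fraction of shared images on which each annotator pair agrees."""
    scores = []
    for a, b in combinations(annotations, 2):
        shared = set(annotations[a]) & set(annotations[b])
        if not shared:
            continue
        agree = sum(annotations[a][i] == annotations[b][i] for i in shared)
        scores.append(agree / len(shared))
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical labels, not from the actual dataset:
labels = {
    "biologist_1": {"img1": "mallard", "img2": "pintail", "img3": "mallard"},
    "biologist_2": {"img1": "mallard", "img2": "mallard", "img3": "mallard"},
    "volunteer_1": {"img1": "mallard", "img2": "pintail", "img3": "pintail"},
}
print(round(pairwise_agreement(labels), 3))  # → 0.556
```

Real studies would typically use a chance-corrected statistic like Cohen's or Fleiss' kappa rather than raw agreement.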

🙌 Justin Kay, Elizabeth Campolongo, Suzanne Stathatos, Ben Weinstein, Dan Morris, Timm Haucke, Sara Beery, Benjamin Kellenberger, Anton Alvarez, David Russell, Thor Veen
🎉 Jon Van Oast, Taiki Sakai - NOAA Affiliate, Sara Beery, Anton Alvarez, Shir Bar
Jenna Kline (jennamkline@gmail.com)
2024-02-22 15:12:40

*Thread Reply:* hi @Rowan Converse! thanks for sharing this dataset! are you able to share the drone telemetry information associated with the data?

Michael Yair (m1cha3l.ya1r@gmail.com)
2024-02-21 16:08:23

Hi all 👋, I'm Michael and I'm a Fullstack developer and a Data Scientist. I'd like to share a web application named STARdbi that I'm taking part in developing, which is about to be published in Ecological Informatics. You can see the website project, which is also the tool itself, right here. Some parts of it are closed to all but collaborators at this point. In a nutshell - we would like to boost entomology research with a tool that scales up data collection, object detection, labeling, and classification using high-resolution sticky-trap images and AI models. Two examples are available on Google Scholar. For further links (video demo, GitLab group source code, Colab examples) see here. I'm really excited, since this is the first time I'm talking about it outside the development forum, and there is so much development to come (follow the GitLab...).

If you'd like to participate, please follow the contacts on the website. I'm here for the SW part, if anyone would like to hear more about it. Have a good one 🍺

👍 Ankita Shukla, Konstantin Klemmer, Suzanne Stathatos, Shir Bar, Aran Dasan, Andrew Schulz, Chris Lange, Sara Beery, Yseult Hb, Aakash Gupta
🦗 Shir Bar
😎 Jon Van Oast
👍:skin_tone_3: Alan Stenhouse
Anna Willoughby (arwill19@gmail.com)
2024-02-22 11:54:16

*Thread Reply:* Hi, none of the webpages loaded... I was using chrome? I'm interested in this, as have hundreds of sticky traps with insects from the southwest with no feasible plan to id them.

Nicolas (nicolas.lecomte@umoncton.ca)
2024-02-22 15:59:50

*Thread Reply:* The links are also not working on safari and chrome on mac…

Michael Yair (m1cha3l.ya1r@gmail.com)
2024-02-23 07:36:57

*Thread Reply:* Yeah... It seems like a cybersecurity action our IT took; they are informed and will probably restore outside access by the end of the weekend. Here you may find a YouTube playlist with demos, a Colab notebook of the AI training, a demo dataset from our database, and the GitLab group.

Those links can be found on our website too, once it's available to others outside the university.

It is still under development, and for the safety of the data we and our colleagues gather, you need to contact the admin for a username and password to use the tool.

Have a great weekend!

Michael Yair (m1cha3l.ya1r@gmail.com)
2024-02-23 07:43:31

*Thread Reply:* The admin: Prof Chen Keasar - keasar@bgu.ac.il

Michael Yair (m1cha3l.ya1r@gmail.com)
2024-03-02 14:31:09

*Thread Reply:* Just to inform all, the cyber attack we were under for a week or so is behind us. The site is back and active.

Anna Willoughby (arwill19@gmail.com)
2024-03-04 16:29:23

*Thread Reply:* that's great news! i'll check it out

Michael Bunsen (notbot@gmail.com)
2024-03-07 12:00:44

*Thread Reply:* Awesome, thanks for the update. I look forward to trying it out!

Cynthia Wu (cynthiaswu@gmail.com)
2024-02-26 19:43:07

Hello! Does anyone know what types of tree disease can be detected by NDVI? Are there certain types of diseases that are less detectable?

I am looking at a case where root fungus was found inside many eucalyptus trees that needed to be removed. However, looking at the trees, the NDVI from the previous year seemed quite good. Is it because the fungus develops quickly, or because the fungus does not affect the leaves, and therefore NDVI cannot catch it?

It was caught using sonic tomography, I’m wondering if near infrared images of the trunk would have caught it too?
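For context on why internal rot can hide from NDVI: the index is just a per-pixel ratio of near-infrared and red reflectance, so anything that leaves leaf reflectance unchanged is invisible to it. A minimal sketch of the formula:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Ranges from -1 to 1; healthy green canopy typically falls around 0.6-0.9."""
    denom = nir + red
    if denom == 0:
        return 0.0
    return (nir - red) / denom

# Illustrative reflectance values (not real measurements): a canopy can still
# reflect like a healthy one while the trunk is already infected.
print(round(ndvi(0.50, 0.08), 2))  # → 0.72, i.e. "looks healthy"
```

So a root fungus that has not yet cut water supply to the leaves would plausibly leave the previous year's NDVI looking fine.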

Akshit Gupta (akshitgupta1695@gmail.com)
2024-04-02 17:06:37

*Thread Reply:* Hi Cynthia, I saw this message now. Were you able to resolve this already? I am also curious about this and would love to know your findings

Evan Eskew (eveskew@gmail.com)
2024-02-26 20:24:46

Hi Cynthia, interesting question! It's not my area of expertise, but maybe this will help point you in the right direction.

I'm guessing the problem is partly that these fungal diseases can spread rapidly (meaning even relatively new data is too old to be useful) and/or the impacts within a given plant are spotty: https://www.fs.usda.gov/nrs/pubs/jrnl/2022/nrs_2022_sapes_001.pdf

> We treated each pixel as an observation because oak wilt disease does not manifest uniformly across the canopy of a tree, especially during early stages of infection. At early stages, the fungus may have infected only a fraction of the vessels within the tree trunk, thus curtailing the water supply to a few branches that become symptomatic while others remain asymptomatic. Treating pixels - rather than the whole tree - as observations is critical to prevent false negatives that result from early infected trees displaying a small number of symptomatic pixels.

It seems others do use different measures to try to get at what NDVI can miss: https://ageagle.com/use-cases/using-the-micasense-chlorophyll-map-to-identify-fungus-missed-by-ndvi/

🌳 Cynthia Wu
❤️ Cynthia Wu
Cynthia Wu (cynthiaswu@gmail.com)
2024-02-27 03:56:17

*Thread Reply:* Thank you so much Evan for such a thoughtful and helpful response! We’re going to try some of these and see how it goes. Really appreciate it!!

👍 Evan Eskew
George Darrah (george.darrah@systemiq.earth)
2024-02-27 05:15:36

Our friends at Silverstrand are launching applications for their Biodiversity Accelerator... if you're building a biodiversity-focused venture at pre-seed/seed stage, this is for you! Can vouch for how awesome these folks are... https://www.linkedin.com/posts/silverstrand-capitalimpact-innovation-capacitybuilding-activity-7165515765654171648-ZArZ?utmsource=share&utmmedium=memberdesktop

❤️ Cynthia Wu
👍 Martin Marzidovsek
Xiaojuan Liu (xjliu@climatechange.ai)
2024-02-28 14:48:24

Hi AI for Conservation community! My name is Xiaojuan, and I’m a research scientist at Climate Change AI (CCAI). I’m reaching out to see if any of you would be open to participating in either a 60-minute group virtual workshop session or a one-on-one interview focused on critical data gaps in using machine learning for work in biodiversity/ecosystems. CCAI, partnering with Google DeepMind, is conducting a global stocktake of critical data gaps that inhibit effective machine learning (ML) solutions to climate-related challenges. The goal of this initiative is to report on the most important data gaps, and lay out pathways for funders, data providers, and researchers to address them.

If you’re interested in joining the group workshop, may I ask you to indicate your availability in the following poll (by March 15): http://whenisgood.net/3xpsa9c?

If you’re interested in participating in a one-on-one interview, please book a time here: https://calendar.app.google/RbDx9GkeDiVxBYJA8

Thank you, and we hope you can join us!

👍 Justin Kay, Carly Batist, Sara Beery, Ankita Shukla
Sara Beery (sbeery@caltech.edu)
2024-03-02 17:01:45

Two week environmental hackathon coming up in May with ETH AI Center: https://hack.biodivx.org/

🌿 Suzanne Stathatos, Robin Zbinden, Edward Bayes
:female_technologist: Suzanne Stathatos, Negar Sadrzadeh, Robin Zbinden
❤️ Patrick Beukema, Yseult Hb, Millie Chapman, Chase Van Amburg, Robin Zbinden, Amee Assad, Alison Ketz, Jon Van Oast, Ronan Wallace, Jennifer, David, Olivier Dietrich, Alan Stenhouse
👍 Piotr Tynecki, Martin Marzidovsek
👍:skin_tone_5: Prabath Gunawardane
😎 Jon Van Oast
Ronan Wallace (rwallace@macalester.edu)
2024-03-05 21:12:28

*Thread Reply:* Thanks for sharing this Dr. Beery! Seems like a great opportunity, and I would love to pass it on to others. Do we know if this is exclusively an in-person event? I can't seem to find a distinction between in-person and virtual participation.

👍 Jennifer
Sara Beery (sbeery@caltech.edu)
2024-03-06 04:55:49

*Thread Reply:* I'm not sure! You could reach out to the organizers and ask maybe?

Sara Beery (sbeery@caltech.edu)
2024-03-06 04:56:23

*Thread Reply:* @David?

David (dwddao@gmail.com)
2024-03-16 07:21:46

*Thread Reply:* Thanks for sharing, we are aiming to make it hybrid!!! @Ronan Wallace 🌿

Sara Beery (sbeery@caltech.edu)
2024-03-04 14:45:59

East Coast opportunity to present student work:

Are you a graduate student, post-doctoral fellow, or early-career professional (from anywhere in the world!) pursuing or considering the field of conservation? Join us at the Student Conference on Conservation Science (SCCS-NY) at the American Museum of Natural History, October 9-11, 2024! Applications to present a talk, speed talk, or poster are due Monday, April 1 at 5:00 PM EDT. Learn more about this year's conference & how to apply: amnh.org/sccsny

😎 Jason Holmberg (Wild Me), Jon Van Oast, Arjun Subramonian (they/them), Andrew Schulz, David Russell, Eric Greenlee, Elizabeth Campolongo
🎉 Jon Van Oast, Anton Alvarez, annie finneran, Alan Stenhouse
👍 Luke Sheneman
Davide Coppola (davidc9320@gmail.com)
2024-03-05 22:39:09

🙋‍♂️ Hi everyone! My name is Davide, I'm a Data Scientist and AI Engineer. For the last 5 years I have been studying and working in Singapore, mainly focusing on Computer Vision, AI robustness, and applications to the domain of healthcare.

🌳 Nature and wildlife have always been a cornerstone interest of mine, and over the last year I started dedicating my skillset to AI4Good applications, specifically for conservation efforts. I did this thanks to the challenges on the FruitPunch AI platform, which has been a great experience so far.

🐦 Outside of work and AI, I have plenty of hobbies, among them wildlife photography and birding!

🤝 I am here to expand my network with other people from around the world who share the same objectives and to find new opportunities to put my skills to good use.

:flag_nz: Lastly, I am currently looking to move to New Zealand, so if you're from there I would be very happy to connect!

:flag_nz: Mitchell Rogers, Shir Bar, Cynthia Wu, Alan Stenhouse
👋 Ed Miller, Ștefan Istrate, Shir Bar, Robin Zbinden, Anton Alvarez, Cynthia Wu, Alexander Merdian-Tarko
:bearid: Ed Miller, Anton Alvarez, Cynthia Wu
Ed Miller (ed@hypraptive.com)
2024-03-06 01:38:38

*Thread Reply:* Welcome, Davide, and thank you for participating in the AI for Bears challenge!

🐻 Davide Coppola
Davide Coppola (davidc9320@gmail.com)
2024-03-06 02:16:40

*Thread Reply:* Thanks Ed! I'm very happy to be able to contribute to this project 🙂

🙌 Ed Miller
Nate Harada (nharada1@gmail.com)
2024-03-06 06:21:13

Hey y’all! I’m Nate, a longtime machine learning researcher and practitioner who now has to brand himself as “AI guy” thanks to the current hype. I’ve spent nearly all my career (and grad school) in machine perception across a very wide range of sensor modalities (camera, lidar, audio, medical sensors like ECG and PPG, accelerometer, radar, some I’ve probably forgotten).

A few quick hits:

• I’m the creator and maintainer of Moonshine, which provides open-source pre-trained remote sensing models for satellite imagery. I’m looking into branching out to drone or aerial photography as well. I think modern computer vision foundation models (which I’ve spent the last 2 years on) have a lot to offer here.
• I also recently built Zeroshot, which helps create efficient classifiers using just text (i.e. type “shark” and get a shark classifier).
• The majority of my conservation work is climate-focused, and I’ve also done work for UNICEF helping with refugee tracking for aid resourcing, but I’m looking at spending more focus on conservation specifically. I’m new so please be gentle!
• I’m based in SF, although I’m currently traveling Asia for vacation, which is why it’s like 3am Pacific.
• Outside work I love to cook and bake (both!), play and listen to music, and fly paragliders. If you do any aerial conservation work such as drones, fixed wing, etc., I’d love to hear what kinds of problems you’re working on and what pain-points you face 🙏
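The Zeroshot idea (“type ‘shark’ and get a shark classifier”) is typically built on CLIP-style joint embeddings: embed the candidate labels and the image in one shared space, then pick the closest label. A toy sketch with hand-made vectors — not Zeroshot’s actual implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_classify(image_vec, text_vecs):
    """Return the label whose text embedding is closest to the image embedding."""
    return max(text_vecs, key=lambda label: cosine(image_vec, text_vecs[label]))

# Pretend outputs of a shared image/text encoder (made up for illustration):
text_vecs = {"shark": [0.9, 0.1, 0.0], "turtle": [0.1, 0.9, 0.2]}
image_vec = [0.8, 0.2, 0.1]
print(zero_shot_classify(image_vec, text_vecs))  # → shark
```

In a real system the vectors would come from a pretrained joint encoder; the classification step itself really is this simple.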

🙌 Shir Bar, Amee Assad, Justin Kay, Dante Wasmuht, Suzanne Stathatos, Omiros Pantazis, Dan Morris, Henrik Cox (Sentinel), Chase Van Amburg, Rebecca Wilks, Abhay, Elizabeth Campolongo, Cynthia Wu
😎 Jon Van Oast, Elizabeth Campolongo, Cynthia Wu
🙌:skin_tone_3: Alan Stenhouse
Jonah Fox (jonahfox@gmail.com)
2024-03-06 08:45:06

*Thread Reply:* wow just checked them out - really nice projects!

Amee Assad (aa3628@columbia.edu)
2024-03-06 09:23:50

*Thread Reply:* Cool projects!

Dante Wasmuht (dante@conservationxlabs.org)
2024-03-06 09:58:30

*Thread Reply:* @Lasha Otarashvili @Jason Holmberg (Wild Me)

❤️ Lasha Otarashvili
Brandon Hays (brandon.hays@duke.edu)
2024-03-06 11:25:06

*Thread Reply:* @David Johnston moonshine could be cool for coral reef segmentation

Dan Morris (agentmorris@gmail.com)
2024-03-06 11:57:08

*Thread Reply:* Re: Moonshine...

While everyone else is complimenting the Moonshine library, I'd like to take a moment to compliment your logo. I see you, "O" shaped like a moon.

Re: drones and conservation...

You may find this page useful; I try to keep track of open datasets and open models in this area:

https://github.com/agentmorris/agentmorrispublic/blob/main/drone-datasets.md

I'm not saying you have to go train a big model on all that data and put it in the Moonshine package, but if you were already thinking of adding object detection... just saying...

Re: music...

If you're ever in the PNW, we can jam and talk about conservation technology; my real contribution to society is:

http://awesomesongbook.com/

Ben Weinstein (benweinstein2010@gmail.com)
2024-03-06 12:01:49

*Thread Reply:* Hi Nate, welcome. We have worked in this area for a couple of years on tree detection and bird species classification from airborne imagery. A quick list of challenges for the field might include:
• Generalization across image resolution for multiple data acquisition systems
• Cross-sensor calibration for remote sensing with different numbers of bands
• Interactive tools for batch labeling across hundreds of images using unsupervised clustering of embedded features
• Cross-geometry prediction of different annotation types, so going between points/polygons/boxes to make sure we have access to every data source we can get (see SAM-geo)
We work on a number of benchmark datasets for tree detection and are developing more (https://milliontrees.idtrees.org/), and we release our models through a python package: https://deepforest.readthedocs.io/en/latest/prebuilt.html. There are large datasets that speak to all these challenges, if you (or anyone else here) are interested. @Devis Tuia’s team works a bit more on some of the robotics/engineering challenges through their new collaborative, which is really interesting: https://wilddrone.eu/
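The cross-geometry point above has an easy direction and a hard one: collapsing polygons to boxes, or boxes to points, is pure geometry, while going the other way (point to box/polygon) needs a model like SAM to recover extent. A minimal sketch of the easy direction:

```python
def polygon_to_box(points):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of a polygon."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def box_to_point(box):
    """Center point of a box. This direction is lossy; inverting it
    (point -> box) requires a segmentation model to estimate extent."""
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2, (ymin + ymax) / 2)

# A hypothetical tree-crown polygon in pixel coordinates:
poly = [(10, 12), (30, 15), (25, 40), (8, 35)]
box = polygon_to_box(poly)
print(box)                 # → (8, 12, 30, 40)
print(box_to_point(box))   # → (19.0, 26.0)
```

Standardizing on these conversions is what lets point-, box-, and polygon-annotated datasets feed one training pipeline.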

Nate Harada (nharada1@gmail.com)
2024-03-06 19:56:48

*Thread Reply:* Wow thanks for the warm welcome everyone, it’s super cool seeing what everyone is up to! I might reach out to some of you to try and learn more about what you’re doing, but also feel free to message me if you wanna chat sometime!

Lasha Otarashvili (otarashvililasha@gmail.com)
2024-03-08 10:56:25

*Thread Reply:* Hi Nate! Cool projects.

I work with the Wild Me team on aerial surveying with the Scout (UI) & Scoutbot (ML plugin) products. Currently Scout is being used for aerial surveys in Kenya and the Kavango-Zambezi transfrontier area, with expansion work underway. Two challenges we are currently working to solve:
• Imbalanced data - given the long-tailed distribution of species, trying to predict objectness along with the species label often reduces performance for objectness, even with a decoupled head.
• Dense images - tight herds. Two problems: obtaining ground-truth labels is hard when there are 1000+ objects present in a single image, and models trained on less dense images have a hard time stepping up. I feel solutions will be outside the localization-as-bbox approach.
Happy to chat if interested.
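One standard (if blunt) baseline for the long-tail issue Lasha mentions is inverse-frequency loss weighting — a sketch with made-up class counts; decoupled heads and resampling strategies go further than this:

```python
def inverse_frequency_weights(counts):
    """Per-class loss weights proportional to 1/frequency,
    normalized so the average weight is 1.0."""
    total = sum(counts.values())
    raw = {cls: total / n for cls, n in counts.items()}
    mean = sum(raw.values()) / len(raw)
    return {cls: w / mean for cls, w in raw.items()}

# Illustrative long-tailed survey counts (not real Scout data):
counts = {"elephant": 9000, "zebra": 900, "sitatunga": 100}
w = inverse_frequency_weights(counts)
print(round(w["sitatunga"] / w["elephant"], 1))  # → 90.0: rare class weighted 90x more
```

The tension Lasha describes is that pushing weights this hard to help rare species can degrade the objectness (animal vs. background) signal, which is class-agnostic.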

:zebra_face: Jason Holmberg (Wild Me)
😎 Jason Holmberg (Wild Me)
Elizabeth Campolongo (e.campolongo479@gmail.com)
2024-03-21 19:07:40

*Thread Reply:* @Dan Morris, awesome list! I'm bookmarking some of these to look at later. Also, KABR is actually available on Hugging Face (along with the telemetry data from the flights).

Dan Morris (agentmorris@gmail.com)
2024-03-21 19:19:42

*Thread Reply:* Glad the list is useful. And thanks for the pointer, I added the HF link to the listing for KABR.

👍 Elizabeth Campolongo, Jason Holmberg (Wild Me)
Cynthia Wu (cynthiaswu@gmail.com)
2024-04-02 17:46:14

*Thread Reply:* Welcome @Nate Harada! I have been working on tree monitoring at Taro AI 🌳 (taroai.com). So cool what you’re building at Moonshine, it looks so useful! I’m also located in SF if you ever want to go for a walk!

Sara Beery (sbeery@caltech.edu)
2024-03-06 13:12:14

https://twitter.com/WildAudioJack/status/1764595424608821691

🎧 Jon Van Oast, Lasha Otarashvili
🐟 Carly Batist, Maddie Cusimano, Benjamin Hoffman, Burooj Ghani, Alba Márquez-Rodríguez, Holly Houliston, Shir Bar
🔊 Carly Batist, Michael Bunsen
💚 Benjamin Hoffman, Michael Bunsen, Martin Marzidovsek
👀 Ben Williams
charlotte (deshchang@gmail.com)
2024-03-08 17:21:35

Hi all! For potential NACCB (North American Congress for Conservation Biology, Vancouver, June 2024) attendees & presenters: note that there are limited travel funds available for students/early career scholars and the Google Form applications are due on March 15, 2024. Otherwise, you can register for the conference and get a discounted rate by the early bird deadline of April 12, 2024.

❤️ Sara Beery, Remi Gosselin
Mai Lazarus (mai.lazarus@gmail.com)
2024-03-12 07:27:23

Hi all, I'm new here, and it's great to get to know such an important space. I'm an ecologist (with no experience in AI) studying fish communities across life stages, and one of my projects revolves around juvenile coral reef fish. I have 7 very similar juvenile Parrotfish species that I would like to identify to the species level. I don't need to have an ID as an output, just to cluster ~100 images of these species to groups of the most similar ones. Is there some available application that does this? Any input would be appreciated. Thanks!

🐟 Shir Bar, Justin Kay
🐠 Shir Bar
Ben Weinstein (benweinstein2010@gmail.com)
2024-03-12 12:55:17

*Thread Reply:* can you show us a few images? Draw the thing in the image you want as an output.

Mai Lazarus (mai.lazarus@gmail.com)
2024-03-17 07:55:07

*Thread Reply:* Hi, attaching some images. What do you mean by what I would want as output?

Ben Weinstein (benweinstein2010@gmail.com)
2024-03-17 10:11:16

*Thread Reply:* Machine learning models take in data and return predictions. Do you want to 1) label each entire image with a species -> classification, or 2) label each individual in an image to species -> detection? If you want to classify them, how many images do you have of each species? Can you go to other data sources and get more from citizen science (iNat)? @Justin Kay what's the latest on the fish foundation models? @Mai Lazarus do you know any R or python, or are you looking for a desktop application?

Mai Lazarus (mai.lazarus@gmail.com)
2024-03-18 07:22:49

*Thread Reply:* Hi, I would like to get a classification; the identity of each individual within a picture is not as important (I would like to crop images with several individuals to get as many images as possible, if the resolution is good enough). I have about 100 images and can certainly get many more from other data sources. I am quite experienced with R. Thanks for the help!

Ben Weinstein (benweinstein2010@gmail.com)
2024-03-18 10:17:52

*Thread Reply:* probably start here: https://cran.r-project.org/web/packages/keras/vignettes/ You'll probably need at least 50 images for most classes to get a decent model. I don't know the fish space as well; hopefully someone can jump in and say if we have a backbone model to start training from.
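Since Mai's goal is grouping similar images rather than naming species, one common recipe is: extract a feature vector per image with a pretrained backbone, then cluster the vectors. A toy k-means on made-up 2-D "embeddings" (a real pipeline would use features from a keras/torch model, and Mai could do the same in R):

```python
import math

def kmeans(points, k, iters=10):
    """Minimal k-means for illustration. Initializes centers spread evenly
    across the input order; assumes k >= 2 and len(points) >= k."""
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[nearest].append(p)
        # Recompute each center as its cluster mean; keep old center if empty.
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Two obviously separated groups of fake "image embeddings":
points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
groups = kmeans(points, k=2)
print(sorted(len(g) for g in groups))  # → [3, 3]
```

With ~100 images and 7 candidate species, k around 7 plus a quick visual check of each cluster would be a reasonable first pass.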

👍 Justin Kay
🙏 Mai Lazarus
Ben Weinstein (benweinstein2010@gmail.com)
2024-03-12 13:27:54

@Nate Harada and I had a nice chat and we discussed some of the needs for model-assisted computer vision annotation environments. I am obviously biased towards airborne tools. I jotted down some thoughts here: https://github.com/weecology/AirborneFieldGuide/blob/main/README.md, copied below. What else would be really useful? Let's brainstorm! Calling out a few members that have experience here -> @Benjamin Kellenberger @Jon Van Oast @Ben Koger @Dan Morris @Zhongqi Miao

```# Airborne Field Guide

## Ideas, guiding principles, and wish list

Human review is here to stay. We need rapid model integration to create faster labeling environments specific to airborne biological use-cases.

* Create an annotation platform that uses existing tools (e.g. Label Studio) to detect and classify biological objects. We don't want to get bogged down in re-inventing annotation classes and UI.
* Pre-annotate imagery with existing model classes and varying levels of taxonomic detail ("duck" versus "White-winged Scoter").
* Batch labeling tools that operate on the flight or campaign level to interactively cluster and label detections grouped by distance in embedded space. In the clustering you can re-label groups of points with existing or new labels. Clicking on individual points takes you to images.
* Prompt-level engineering to find images with certain characteristics: "Find me all the images over water."
* Both detection-level and image-level query to find detections similar to a target.
* Pre-computed zoom levels based on detection density -> https://openaccess.thecvf.com/content_CVPRW_2020/papers/w11/Li_Density_Map_Guided_Object_Detection_in_Aerial_Images_CVPRW_2020_paper.pdf
* Nightly model training and re-labeling, re-clustering.
* Label propagation at the image level. If I click on one animal in a flock/herd, it should auto-update nearby objects.
* Label propagation at the annotation level, using SAM to go between points, boxes, and polygons.
* On a new mission, draw a bounding box of the geographic area and query the eBird/Map of Life/iNaturalist APIs to get abundance curves and filter the species list.

## Remaining questions

* Local desktop installer? Especially for field researchers around the world? A stripped-down version.
* How to learn from AIDE? From Scout? FathomNet Portal, SAM-geo. Should we just merge with them? How do we promote community collaboration and avoid re-invention?
* https://www.tator.io/? Another option.
* BioCLIP foundation model -> https://arxiv.org/abs/2311.18803 versus more bespoke models? Engaging the iNat/eBird teams.
```
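The "click one animal in a flock, auto-update nearby objects" wish-list item reduces, in its simplest form, to propagating a label to detections within some pixel radius of the clicked one — a deliberately naive sketch (a real tool would propagate in embedding space, not just pixel space):

```python
import math

def propagate_label(detections, clicked_idx, new_label, radius=50.0):
    """detections: list of dicts with 'center' (x, y) and 'label'.
    Relabels the clicked detection and every detection within `radius` of it."""
    clicked_center = detections[clicked_idx]["center"]
    for det in detections:
        if math.dist(det["center"], clicked_center) <= radius:
            det["label"] = new_label
    return detections

# A hypothetical flock: two birds close together, one far away.
dets = [
    {"center": (100, 100), "label": "bird"},
    {"center": (120, 110), "label": "bird"},
    {"center": (400, 400), "label": "bird"},
]
propagate_label(dets, clicked_idx=0, new_label="White-winged Scoter")
print([d["label"] for d in dets])
# → ['White-winged Scoter', 'White-winged Scoter', 'bird']
```

Swapping `math.dist` on pixel coordinates for cosine distance on detection embeddings gives the batch-labeling-by-cluster behavior in the wish list.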

👀 Suzanne Stathatos, Timm Haucke, Rebecca Wilks, Shir Bar, Edward Bayes, Subhransu Maji, Sara Beery, Elizabeth Campolongo, Sam Lapp
🎉 Jon Van Oast, Elizabeth Campolongo, Alan Stenhouse
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2024-03-12 13:28:43

*Thread Reply:* @Lasha Otarashvili

Suzanne Stathatos (suzanne.stathatos@gmail.com)
2024-03-12 13:32:08

*Thread Reply:* @Kakani Katija

Dan Morris (agentmorris@gmail.com)
2024-03-12 14:00:04

*Thread Reply:* My go-to recommendation has been "ask Ben Weinstein for his Label Studio template and do exactly what he did"; I'm not sure what I should make of the fact that my recommendation for Ben's setup is stronger than Ben's recommendation for Ben's setup. But FWIW, Ben, what you're doing felt like exactly the right amount of not reinventing the wheel, but also exactly the right amount of not taking dependencies on complicated ML features in Label Studio (rather just having good tools for fine-tuning and running models outside of LS, e.g. overnight, and loading the results into LS). So IMO a great starting point would be for you to document what you've done with Label Studio to the point where others could try it. Eh?

👍 Ben Weinstein, Sara Beery
😂 Jon Van Oast
👍:skin_tone_3: Alan Stenhouse
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-03-12 14:07:02

*Thread Reply:* Nice list above! We have been actively developing SmarterLabelme https://github.com/robot-perception-group/smarter-labelme for many of the listed tasks, focusing on aerial videos with many individuals in view. We are using it for detection, tracking, label propagation, etc., but also a lot for quickly auto-annotating behaviors... feel free to check it out or ask me about it. The behavior version is here: https://github.com/robot-perception-group/animal-behaviour-inference

👍 Ben Weinstein
😎 Jon Van Oast
Ben Weinstein (benweinstein2010@gmail.com)
2024-03-12 14:23:57

*Thread Reply:* That's great, can you talk a little about what the largest time investments were? What have you learned so far?

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-03-12 14:42:10

*Thread Reply:* Sure. It depends on the exact task, but typically one would spend most time on the initial expert labeling to get a good detection model. If such a model already exists (which is the case for several species now), then you gain speed even there. For behavior classification, for example, we spent a couple of days annotating some initial sequences, then used them to learn a classifier, which was then used (within the framework) to 'quickly' auto-annotate more and more, and those many new annotations were used to further improve the accuracy of the model. What we have learned -- one thing, at least for the behavior classification task, that stands out to us is that some behaviors are very 'temporal' in nature and hard to learn from 'still annotations'. Another thing is that to use the auto-generated annotations for retraining, one still has to go over the annotations to correct the few wrong ones. Even though this is much faster than annotating all those videos in the first place, it is still unavoidable.

Ben Weinstein (benweinstein2010@gmail.com)
2024-03-12 14:56:58

*Thread Reply:* Great! but I meant what the time investments were for designing the system. What took the most time to create, using QT, image backend, model deployment?

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-03-12 14:59:56

*Thread Reply:* @Eric Price can answer that best

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-03-12 15:11:21

*Thread Reply:* Hi. We, too, extended an existing open-source annotation tool and integrated models for detection and tracking into it, so we spent almost no time on the user interface, except adding a few buttons/key shortcuts for the new functionality. Maybe 2 days to understand the code and API, and one more day extending the GUI - but depending on the complexity and functionality of the tool that can be more elaborate, especially for web-based tools. The main work was definitely model deployment - on the order of weeks, but that's mostly because we combined multiple models and extended them beyond what the base models could do on their own, so there was new functionality. Deploying an existing model should be much quicker, 1-2 days maybe. It also depends on whether the frameworks are compatible: in our case the GUI was Python and the model was written in PyTorch, so we could just call it directly, but there are combinations where that could be more time consuming, especially for cloud-based tools.

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-03-12 15:12:30

*Thread Reply:* does that answer your question?

👍 Ben Weinstein, Sara Beery
Kakani Katija (kakani@mbari.org)
2024-03-12 16:13:51

*Thread Reply:* @Ben Weinstein been meaning to email you back and set up a time to chat. In two weeks?

Ben Weinstein (benweinstein2010@gmail.com)
2024-03-12 16:32:02

*Thread Reply:* for sure, I know you are super busy, take your time.

Ben Weinstein (benweinstein2010@gmail.com)
2024-03-12 17:43:10

*Thread Reply:* I feel like I should also mention VIAME here @Daniel Davila https://www.viametoolkit.org/

👍 Aamir Ahmad, Sara Beery
Jonah Fox (jonahfox@gmail.com)
2024-03-13 16:55:50

*Thread Reply:* wow some amazing stuff here guys!

Jonah Fox (jonahfox@gmail.com)
2024-03-13 16:57:13

*Thread Reply:* I'm interested in tagging of aerial data for habitats, e.g. "road", "tree", "hedgerow", and then more specific tags where the detail supports it - e.g. specific species, or "species-rich hedgerow". Does anyone know of any work in this area in particular?

Lasha Otarashvili (otarashvililasha@gmail.com)
2024-03-14 14:21:43

*Thread Reply:* Something that could be useful - dealing with image overlap. Many fly-by styles that do continuous collection produce images that are partially overlapping - making it possible to overcount the bounding-box predictions unless dealt with. Scout has a feature to draw a straight line to discard an area in the image that appears elsewhere. This works well for plane-mounted cameras as they fly in long straight lines - so there is only one axis of overlap. Drone-mounted cameras might produce overlap in both dimensions - think of a reference image having 4 adjacent overlapping images from N,E,S,W.
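For what it's worth, once detections from overlapping images are mapped into one shared coordinate frame, the single-axis case can be handled with greedy IoU suppression — a rough sketch, not Scout's actual line-based approach:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def dedupe(boxes, thresh=0.5):
    """Greedily drop boxes that overlap an already-kept box above `thresh`.
    Assumes all boxes are already in one shared (geo)coordinate frame."""
    kept = []
    for b in boxes:
        if all(iou(b, k) < thresh for k in kept):
            kept.append(b)
    return kept

# The same animal seen in two overlapping flight-line images, plus one other:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(len(dedupe(boxes)))  # → 2
```

The hard part Lasha points out remains: for drone imagery with overlap on two axes, the registration into that shared frame (and choosing which duplicate to trust) is where the real work is.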

Sachin Wani (sachin27071998@gmail.com)
2024-03-13 10:31:33

Hi all! I was invited to this Slack group by Dan Morris. I am a new Data Scientist currently at Lenovo in NC, USA. I did my Masters at Rutgers (graduated last Summer). I was lucky to work with Island Conservation to build a custom-trained classifier that can detect native species of Robinson Crusoe Island (off the coast of Chile). Thanks to Dan's help and Kyra Swanson's animl-py repository, I was able to build a classifier that works on top of MegaDetector to detect and recognize Coatis, cats, rabbits, rodents, birds, and other mammals. I look forward to interacting with you all in this group and it's great to be part of this community of amazing folks!

🙌 Justin Kay, Suzanne Stathatos, Dan Morris, Jon Van Oast, Ted Schmitt, Piotr Tynecki, Jason Holmberg (Wild Me), Bernie Boscoe, Chris Lange, Elizabeth Campolongo, Omiros Pantazis, Mohit Dubey, Sara Beery, Shir Bar, Ben Williams, David Will
David Will (david.will@islandconservation.org)
2024-03-20 22:36:41

*Thread Reply:* Great to see you here Sachin!

❤️ Sachin Wani
Anna Willoughby (arwill19@gmail.com)
2024-03-21 11:50:28

*Thread Reply:* Hi! It seems like your classifier has a lot of overlap in species with my camera trap project, canyoncritters.org. I'd be interested in applying it if it's available.

Sachin Wani (sachin27071998@gmail.com)
2024-05-02 08:41:04

*Thread Reply:* @Anna Willoughby Hi, I would be happy to help. I am not very active on Slack, but you can send me an email (sachin27071998@gmail.com) and I can share the model weights, which can be used with the animl-py repository. https://github.com/conservationtechlab/animl-py

Jennifer (jzhuge@alumni.cmu.edu)
2024-03-13 14:15:20

Hey all, those who were in normal tech and managed to successfully break into climate tech, did you do volunteer work on top of a job or quit your job and do unpaid AI for Conservation work before you were able to get an offer in climate tech? I've been applying to all these climate tech jobs (not limited to AI for conservation) to radio silence (I have an MSc in AI).

➕ Nate Harada, Alexander Merdian-Tarko
❤️ Kalindi Fonda
Cynthia Wu (cynthiaswu@gmail.com)
2024-04-02 17:25:21

*Thread Reply:* When I was at Google, I got started by doing a 20% project in climate!

Tim Gardner (circuitformation@gmail.com)
2024-03-13 18:54:39

Hi All, I just heard about this group and joined. Our research team at the University of Oregon is developing self-supervised models (BERT-related) to automatically parse animal vocalizations. I described some of our preliminary work in this recent talk for the Interspecies Internet (link below). I hope that we can apply these models to conservation challenges in addition to research on animal vocalizations. For models built to detect the presence of a species in environmental recordings, I think it will be interesting to research what kinds of self-supervised pre-training processes lead to the most sample-efficient fine-tuned models for conservation. Happy to chat! https://www.interspecies.io/lectures/timgardner

😎 Jon Van Oast, Elizabeth Campolongo
🐦 Dan Morris, Shir Bar, Sankalpa Ghose, Chase Van Amburg, Sara Beery, Alan Stenhouse
🙌 Ben Williams
Cara Appel (appelc@oregonstate.edu)
2024-03-14 14:37:13

*Thread Reply:* Hi Tim, so interesting to hear about your research! There are quite a few of us at Oregon State University who are using bioacoustics to detect birds. It would be great to connect sometime.

Tim Gardner (circuitformation@gmail.com)
2024-03-15 12:03:39

*Thread Reply:* I'd love to connect! Feel free to reach out to timg@uoregon.edu and we can schedule a time to talk.

Petar Gyurov (pgyurov93@gmail.com)
2024-03-14 10:34:18

Hi guys. I’m curious to hear what tools others here use for labelling? At my company, we’ve been self-hosting the open source edition of LabelStudio but it doesn’t check all the boxes for us.

We’re a small team, so I’m interested in something that:
• handles labelling for classification, object detection and segmentation (bonus points for video)
• has some sort of data and user management
• is self-hostable
• is ideally free, but we're open to affordable pricing
I am doing my own research too but wanted to see if there’s something people strongly recommend! Thanks.

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-03-14 11:34:58

*Thread Reply:* Not sure if this is what you are looking for: we developed an open-source tool for video annotation (smarter-labelme). It can do classification and object detection, but only limited segmentation. For bounding-box annotation, it uses built-in detectors and trackers to streamline the annotation process (the tool makes annotations and the user corrects them as needed). It's built with Python + Qt and is not web-based; ideally you'd use a computer with a GPU for running the detection/tracking models. There's no built-in data or user management, though, beyond keeping track of the annotated frames per video in the form of JSON files. We handled that by handing out individual video files to annotators and then collecting the annotation data in a shared folder. https://github.com/robot-perception-group/smarter-labelme

Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-03-14 11:45:28

*Thread Reply:* Thanks @Eric Price... @Petar Gyurov there is a recent discussion on this in a thread 2 messages above in the general channel!

Ben Weinstein (benweinstein2010@gmail.com)
2024-03-14 11:50:28

*Thread Reply:* linking to above thread https://aiforconservation.slack.com/archives/CLWGQ4BJ6/p1710264474399949

Petar Gyurov (pgyurov93@gmail.com)
2024-03-14 12:20:41

*Thread Reply:* Thanks, I wasn’t able to see the previous messages in the channel for some reason, but I can now. @Eric Price, smarter-labelme looks great, but I am not sure it fills the gaps that my team faces in Label Studio. Will definitely keep an eye on its development though.

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-03-14 13:14:25

*Thread Reply:* Thanks. If you know any features that you'd like to see implemented or think are missing, feel free to tell us 🙂

Dan Morris (agentmorris@gmail.com)
2024-03-14 14:59:50

*Thread Reply:* @Petar Gyurov Which of those boxes does the self-hosted Label Studio not check? I'm not claiming it's perfect; it just seems to check those boxes, and I've been impressed with it to the degree that I've tried it. My only gripe with Label Studio was that for very small (n=1 person) efforts where some amount of customization is required, it was easier to quickly modify a client-side tool like Label Me, but Label Studio still feels like the right solution for what you're describing.

👀 Sara Beery
Petar Gyurov (pgyurov93@gmail.com)
2024-03-18 04:48:59

*Thread Reply:* @Dan Morris You’re right, I left out some detail. Here are some things we struggle with on the free version:
• User management: I believe the paid version has this, but currently you can’t restrict users to a specific project on the free version. Also, users can’t even reset their password (unless I’m missing something).
• Data exploration: there’s no easy way to filter on “images with label x”. You can filter on the “Annotations” field, but that just gives you the raw JSON for some reason, and it’s a pain to work with. It can be crafted as a SQL query, but not in the UI. The same goes for more complicated filters like “has labels x and y but not z”.
• I’d also love a summary of my data/label distribution (e.g. “only 5% of your dataset represents class z”).
• We deal a lot with large image files:
◦ For some reason LS takes a lot longer to load/render a large image than it should.
◦ I’d love a way to automatically tile large images, label the tiles, then load those tiles into my model (this is something we do manually at the moment).
• We deal a lot with metadata:
◦ Each image has a lot of information behind it that currently sits in a separate database.
◦ Related to the dataset distribution I mentioned above, I’d love to be able to plug that data into LS and explore it.
• The complexity of LS’s architecture is making us hesitant to fork it and develop against it. For example, I am not sure why the database needs to be so involved, but I’m sure there’s a good reason for it.
• I have some complaints about the UI in places, but in general it’s OK. We’re not on the latest version, so perhaps some of my gripes have been addressed.
The paid version is probably worth it just for the user management, but as a small team that only has spikes of labelling throughout the year, the pricing is a bit prohibitive. Having said that, LS Enterprise is probably the cheapest out there from my research.

We’ll never have a perfect tool that has all the features we want, and LS gets close to it, but just wanted to see what else is out there.
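As an aside, the automatic tiling wished for above can be sketched as a small crop-window generator (a hypothetical helper, not a Label Studio feature; tile size and overlap values are arbitrary):

```python
def tile_coords(width, height, tile=512, overlap=64):
    """Yield (x, y, w, h) crop windows that cover a large image.

    Consecutive tiles share `overlap` pixels, so an object cut by one
    tile boundary appears whole in the neighbouring tile; predictions
    can then be deduplicated across tiles.
    """
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    if xs[-1] + tile < width:   # make sure the right edge is covered
        xs.append(width - tile)
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if ys[-1] + tile < height:  # ...and the bottom edge
        ys.append(height - tile)
    for y in ys:
        for x in xs:
            yield x, y, min(tile, width), min(tile, height)

print(len(list(tile_coords(1000, 600))))  # → 6 tiles for a 1000x600 image
```

Each window would be cropped, labelled (or run through the model), and the resulting boxes offset by (x, y) back into full-image coordinates.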

Dan Morris (agentmorris@gmail.com)
2024-03-18 15:46:24

*Thread Reply:* This is a really good summary, thanks. It's consistent with the reason that for my own small projects - where I'm the one doing the labeling - I chose Label Me, even after being impressed by Label Studio. The issue I had with LS was primarily the architectural complexity, but it's complex because it's powerful and has all that user management functionality and the ability to handle every imaginable data source and modality; I don't think we can have simplicity and all those things. I also found the metadata management in LS - including just importing existing annotations - to be a bit of a hassle. Well-documented, but a hassle.

Sidebar: this isn't really related to LS, but in terms of running a model with automatic tiling of large images, check out SAHI (https://github.com/obss/sahi).

In any case, thanks for this summary! I don't think any of these problems are unique to conservation, and I'll still come down on the side of re-using off-the-shelf tools rather than re-building. Everyone is likely to have a slightly different set of priorities, but it's good that there's a growing number of stable labeling tools that hit different points on the spectrum from "agile but feature-limited" to "cumbersome but feature-rich" (Label Me and Label Studio are maybe total opposites on that spectrum).

Petar Gyurov (pgyurov93@gmail.com)
2024-03-19 05:56:04

*Thread Reply:* SAHI looks really interesting, thank you for sharing 👍

Urs (urs.waldmann@uni-konstanz.de)
2024-03-15 13:21:29

Send in your favorite animal papers to our CV4Animals workshop @CVPR by March 27!

Look forward to the exciting program and hope to see many animal enthusiasts in Seattle!

https://www.cv4animals.com

🙌 Sara Beery, Suzanne Stathatos, Jon Van Oast, Justin Kay, Yseult Hb, Mark Goldwater, Ishan Nangia, Subhransu Maji, Piotr Tynecki, Oisin Mac Aodha, Mitchell Rogers, Gengshan Yang, Levi Cai, Rohan Sawahn
Urs (urs.waldmann@uni-konstanz.de)
2024-03-24 08:22:21

*Thread Reply:* Deadlines extended!

Submission Deadline: 11:59 pm, April 9, 2024 (Pacific Time)
Notification of Decision: April 29, 2024

Urs (urs.waldmann@uni-konstanz.de)
2024-03-24 08:22:30

*Thread Reply:* We will also provide a number of free conference and workshop registrations for the outstanding papers, thanks to our generous sponsors.

Ishan Nangia (ishannangia.123@gmail.com)
2024-03-15 15:09:44

Hey everyone. New here. I'm a conservation scuba diver + data scientist working on computer vision tasks for a couple of NGOs based out of India and diving for a marine conservation NGO here. Love the discussions happening here. Glad to have found this space! 🌊

🌊 Suzanne Stathatos, Ariel Chamberlain, Shir Bar, Yseult Hb, Arthur Caillau, Malte Pedersen, Rebecca Wilks
😎 Jon Van Oast, Gracie Ermi
🐟 Shir Bar
🙌 Ben Williams
Kalindi Fonda (kalindi.fonda@gmail.com)
2024-03-16 09:48:51

*Thread Reply:* Wow, this is cool! Check out #marine. Curious to learn more about the project, and about any specific diving + data communities 🌊

Ishan Nangia (ishannangia.123@gmail.com)
2024-03-16 14:05:31

*Thread Reply:* Yup. Already joined it ^_^

I am working with Coastal Impact India where we have set up artificial coral reefs that we are now monitoring using AI. Using instance segmentation algorithms to detect and size up corals.

🌊 Kalindi Fonda
👍 Kishore Panaganti
🤟 Tarun
Owen Xing (owenxing1994@gmail.com)
2024-03-17 21:29:27

Hi guys 😆! My name is Owen Xing. I am a first-year PhD student at Griffith University, Australia. As you may be aware, the koala is Australia's symbol and iconic animal. I am currently engaged in a project entitled "Predicting Koala Road Crossing Behaviors using AI-Powered Observation Networks," which uses camera traps to spot wild koalas in suburban areas. Recently, I came across the website climatechange.ai, which caught my attention due to its relevance to my research area and its exciting content. I have just joined this organization and am keen to connect with individuals who share a passion for integrating AI with conservation efforts, particularly in monitoring parks and wildlife. Looking forward to talking to everyone. ❤️

❤️ Ankita Shukla, Jason Holmberg (Wild Me), Wenyuan Zhang, Ishan Nangia, Cynthia Wu
🐨 Kalindi Fonda, Shir Bar, Chase Van Amburg, Elizabeth Campolongo, Wenyuan Zhang, Dan Morris, Holly Houliston, Cynthia Wu, Rebecca Wilks
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2024-03-17 21:57:37

*Thread Reply:* Welcome!

Jonah Fox (jonahfox@gmail.com)
2024-03-18 17:15:43

Hi - is anyone aware of any classifiers for habitats from aerial images? Open source if possible!

Nate Harada (nharada1@gmail.com)
2024-03-18 18:19:23

*Thread Reply:* Hey! Can you share more about your problem? Or even better, that plus a few example images and what you want the classifier to output for each one?

Ben Weinstein (benweinstein2010@gmail.com)
2024-03-18 18:22:12

With some prodding from @Dan Morris, I'm writing a blog post for Label Studio on how we 1) cut large geospatial tiles into reasonable chunks that can be read by Label Studio, 2) use foundation models to pre-annotate images using the Label Studio Python SDK, and 3) provide a couple of useful utility functions for converting between geospatial (projected) and image-based (Cartesian) coordinate representations of annotations. I would like some feedback on how to make the explanations clearer. https://github.com/weecology/LabelStudio_BlogPost/blob/main/blogpost.ipynb

🎉 Jon Van Oast, Justin Kay, Dan Morris, Nate Harada, Timm Haucke, Sara Beery, Shir Bar, Jason Holmberg (Wild Me), Edward Bayes, Ishan Nangia, David Russell, Elizabeth Campolongo, Mitch Fennell, Thor Veen, Rebecca Wilks
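For readers curious about point 3, the projected-to-pixel conversion for a north-up raster reduces to inverting an affine transform; a minimal sketch (hypothetical function names; the notebook itself presumably works with rasterio's Affine objects directly):

```python
def geo_to_pixel(x, y, transform):
    """Projected (x, y) coordinates -> (row, col) pixel indices.

    `transform` is the rasterio-style affine (a, b, c, d, e, f):
        x = a*col + b*row + c,   y = d*col + e*row + f.
    This sketch assumes a north-up raster (b == d == 0), the common case.
    """
    a, b, c, d, e, f = transform
    col = round((x - c) / a)
    row = round((y - f) / e)
    return row, col

def pixel_to_geo(row, col, transform):
    """Inverse of the above: pixel indices -> projected coordinates."""
    a, b, c, d, e, f = transform
    return a * col + c, e * row + f

# A 0.1 m/pixel north-up raster with its top-left corner at (400000, 5000000):
t = (0.1, 0.0, 400000.0, 0.0, -0.1, 5000000.0)
print(geo_to_pixel(400010.0, 4999995.0, t))  # → (50, 100)
```

Note the negative e coefficient: projected y decreases as pixel rows increase, which is why annotations round-trip through this transform rather than a simple scale.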
Dan Morris (agentmorris@gmail.com)
2024-03-18 19:18:09

*Thread Reply:* Let the record show that I didn't just prod and run, I got Ben's tutorial running (proof in the image below!). Will send some minor suggestions via email, but this looks great!

If you need more prodding, I still think there's a Chapter 2 about how to do stuff in Label Studio, then a Chapter 3 where you show all the stuff you do beyond notebook-scale, but I wouldn't recommend actually writing either of those chapters, instead IMO they should be a presentation you give at the next appropriate conference/workshop/AI4RS seminar.

But the ability to do stuff like this is another reason we shouldn't re-invent the wheel; this is some serious power-user Label Studio stuff you're doing here.

🎉 Elizabeth Campolongo, Mitch Fennell
Finn J (finnjanson@gmail.com)
2024-04-10 18:51:48

*Thread Reply:* Really love this work!

Peter van Lunteren (contact@pvanlunteren.com)
2024-03-19 09:25:36

Hi! I'm looking into the feasibility of developing a sound recognition model that can detect human and mammalian sounds in Northern Namibia. The idea is to use this model to notify park management in real time about any detections. The goal is to identify (1) a general class for mammalian sounds (e.g., lion, elephant, antelope, hyena, canid) and (2) a general class for any human sounds (voice, vehicle, music, gunshots, etc.). Any other sounds will be labelled as background (birds, insects, rain, thunder, etc.). Does anyone know of datasets, sound libraries, or repositories where I can find sound files for this project? A semi-extensive search online yielded the following sources. At this point I'm just looking at global sound datasets, but the preference would be Southern Africa.

If you have any other recommendations or dos and don'ts, please let me know! Thanks in advance :)

Mammal:

Human:

Background (bird):

Background (insect):

Background (amphibian):

Background (ambient):

gbif.org
Zenodo
kaggle.com
Zenodo
Ben Williams (ben.williams.20@ucl.ac.uk)
2024-03-20 10:06:47

*Thread Reply:* FSD50K contains strongly labelled human sound events (+ a limited number of animal sound events) https://zenodo.org/records/4060432

👍 Peter van Lunteren
Oscar Schafer (oscar.schafer@bbc.co.uk)
2024-03-20 12:20:14

*Thread Reply:* You might wish to take a look at PANNs (paper) and inference Python package. We've used this to develop and trial audio monitoring (to detect human sounds) on BBC wildlife productions. There are some links to the Google AudioSet as well, which could be used if you're looking for more raw datasets.

👍 Peter van Lunteren
Peter van Lunteren (contact@pvanlunteren.com)
2024-03-21 11:55:12

*Thread Reply:* Perfect, thanks!

Finn J (finnjanson@gmail.com)
2024-04-10 18:52:20

*Thread Reply:* This is fascinating; I'd love to learn more about your work, Peter.

Isaac Badu (ikebaduas7@gmail.com)
2024-03-20 11:57:53

Hello everyone, my name is Isaac Badu. I am a first-year graduate student at Wake Forest University studying Biology with a focus in Quantitative Ecology. I am currently working on a project on habitat fragmentation and species coexistence. Specifically, I am looking at (1) how the matrix (non-habitat) affects species' dispersal behavior and mortality during dispersal in fragmented landscapes, and (2) whether species coexist or exclude each other in patches after fragmentation.

I am excited to be here and I look forward to chatting with you all.

🙌 Suzanne Stathatos, Jon Van Oast, Nicolas Arrieta Larraza, Shir Bar, Ishan Nangia, Brianna Rivera, Emilio Luz-Ricca
👋 Carly Batist, Alexander Merdian-Tarko
👀 Finn J
Brianna Rivera (rivvbri@gmail.com)
2024-03-20 14:46:46

Hi everyone!

I'm excited to join this space and wanted to introduce myself. My background primarily lies in life sciences, with a focus on biology. A few years ago, I completed a data science bootcamp, and since then, I've been honing my skills through personal projects. I'm eager to join the field of AI conservation. I'm interested in exploring any volunteering opportunities available. Looking forward to chatting with you all! 😊

🙌 Jon Van Oast, Dan Morris, Isaac Badu, Shir Bar, Ishan Nangia
👋 Carly Batist, Alexander Merdian-Tarko
Kit Lewers (krle4401@colorado.edu)
2024-03-21 15:44:04

Hello! I'm new to this Slack, but I wanted to share that I will be running the AI session at the TDWG-SPNHC conference this year, in case anyone would like to submit a talk (or a talk to any other session). If you have any questions, please feel free to DM me 🙂 :

https://www.tdwg.org/conferences/2024/sessions/#sym15-emergent-ai-contributions-to-biodiversity-[…]ion-opportunities-challenges-and-a-year-in-review

👀 Sara Beery, Ishan Nangia
🎉 Alan Stenhouse
Matt Ziegler (mattzig@cs.washington.edu)
2024-03-21 16:00:39

Hi everyone! Next week I'm presenting a webinar on "Designing Equitable Ocean Technologies," on Thursday March 28 at 3:30PT!

The topic: emerging technologies are starting to reshape ocean management and governance, like video/AI monitoring of fishing boats, ocean observation systems incorporating a variety of sensors, and increasingly complex fish stock models. However, we know that technologies have not always had equitable impacts, like uneven Internet/device access (the "digital divide") and racial/gender biases in AI foundation models. So, how do we ensure that ocean management technology is fair to everyone?

https://oceannexus.org/webinar/

🎉 Sara Beery, Elizabeth Campolongo, Malte Pedersen, Alessandra Sellini, David Will, Justin Kay
🌊 Alessandra Sellini, Nicolas Arrieta Larraza, Levi Cai, Justin Kay
Cameron Trotter (cater@bas.ac.uk)
2024-03-22 05:29:20

*Thread Reply:* Sounds interesting, will this be recorded? It's at 10:30pm my time 😬

💡 Nicolas Arrieta Larraza
Matt Ziegler (mattzig@cs.washington.edu)
2024-04-12 02:45:11

*Thread Reply:* It is recorded! https://www.youtube.com/watch?v=kJRAdUyGHzM

❤️ Cameron Trotter, Erik Harden
Jarrett Blair (jarrettblair@gmail.com)
2024-03-25 17:39:41

Hi everyone! We (@Jamie Alison, @Quentin Geissmann, and others) are running ACCESS-2024, a funded and selective summer school on computational entomology (primarily imaging & computer vision). It will take place in Aarhus (Denmark) from September 30th to October 4th, 2024, and we are looking for motivated students to apply (deadline April 15th). Accommodation, food and tuition are paid for – selected candidates just need to fund their travel to and from the venue.

Additional info and application details can be found on our website https://darsa.info/ACCESS-2024/. Please circulate to interested people, or repost on X. 🙂

❤️ Suzanne Stathatos, Shir Bar, Vinicius Amaral, Quentin Geissmann, Helena Russello, Carly Batist, Anton Alvarez, David, Joe Nangle, Magali Frauendorf, Justin Kay, Roberta Hunt
😎 Jon Van Oast, Xiaojuan Liu, Justin Kay
🐞 Kaitlyn Gaynor, Carly Batist, Joe Nangle, Kalindi Fonda, Justin Kay, Cara Appel, Alan Stenhouse
🦗 Shir Bar, David Russell, Kalindi Fonda, Justin Kay
🙌 Jamie Alison, Ben Williams, Justin Kay
Justin Kay (justinkay92@gmail.com)
2024-03-28 19:20:57

*Thread Reply:* So cool!

❤️ Jarrett Blair
Chris Lang (chrislang@ucsb.edu)
2024-03-27 17:32:26

Hi all! My name is Chris, I'm a software engineer at the Benioff Ocean Science Laboratory. We have been working on an object tracking model to count different types of trash/plastics that are collected on trashwheels in Baltimore Harbor. We have a unique opportunity to compare our model results with a dumpster dive event where volunteers will be manually sorting and counting the trash. Does anyone know of any papers or studies that have done something similar, comparing computer vision models to manual surveying methods, that might be good references for this type of comparison?

👋 Sara Beery, Chris Lange
Joe Ferdinando (jgf94@cornell.edu)
2024-03-27 21:30:00

*Thread Reply:* This paper from the Ocean Cleanup might be a useful starting point https://www.mdpi.com/2072-4292/13/17/3401

🙏 Chris Lang
Chris Lange (s2125675@ed.ac.uk)
2024-03-28 09:57:42

*Thread Reply:* Hi Chris. Nice name.

😆 Chris Lang
Dan Stowell (dan.stowell@naturalis.nl)
2024-03-28 11:00:34

Postdoc job with us! Postdoctoral researcher (postdoc) in AI for acoustic biodiversity monitoring https://tiu.nu/22215 An EU-funded position using AI to monitor birds by their sound. (Please share!)

❤️ Suzanne Stathatos, Oisin Mac Aodha, Clare Price, Sara Beery, Alan Stenhouse
👀 Sam Lapp
Anton Alvarez (aalvarez@wwf.es)
2024-03-30 09:20:24

After reading this amazing paper, which frames the paradigm of application-driven ML (ADML) research (predominantly the type conducted in this Slack) and provides advice on reviewing, hiring, and teaching, I'm left wondering what you would like, or see as necessary, from your conservationist collaborators to improve dataset creation, task framing, and criteria for success. I believe many of us have encountered claims of a well-annotated dataset, only to find out it wasn't quite so. But at the same time, weak labeling can sometimes be applied to significantly reduce the workload. Interdisciplinary collaboration is very necessary, so are you aware of any guidelines or resources that help conservationists collaborate better with ML professionals? Do you think they could be useful? How do you usually do it?

The closest thing I'm familiar with is the guideline provided by @Dan Morris in his talk at CV4Ecology 2023.

➕ Justin Kay, Mitchell Rogers, Andrew Schulz
😎 Jon Van Oast
Bernie Boscoe (boscoeb@sou.edu)
2024-03-30 12:52:14

*Thread Reply:* Hi Anton, thank you for sharing that excellent paper. I am thinking along the same lines, having done a similar exploration with astronomers https://arxiv.org/pdf/2211.14401.pdf and now working with conservationists. Happy to discuss!

Aran Dasan (aran@sntech.co.uk)
2024-04-02 03:34:23

*Thread Reply:* Non-expert here, setting up our first 'applications-driven' computer vision project. These were helpful as I reach out to the CV experts we're going to be working with 🙂 thanks for sharing!

Burooj Ghani (buroojghani@gmail.com)
2024-04-01 15:34:57

The DCASE 2024 Few-shot Bioacoustic Event Detection task is now live. Please participate and share.

https://dcase.community/challenge2024/task-few-shot-bioacoustic-event-detection

👍 Oisin Mac Aodha, Benjamin Hoffman, Dan Morris, Sara Beery, Aoife Toomey, Maddie Cusimano, mimi
💡 Nicolas Arrieta Larraza
Nina Grace Baranduin (ngb34@cam.ac.uk)
2024-04-04 11:22:49

Hi everyone 👋 Is anyone working with camera trap, bioacoustic, or GPS data on carnivores? Or does anyone know any groups working with this kind of data? I'm especially interested in African wild dogs or wolves, but it would be great to hear about others too.

Jenna Kline (jennamkline@gmail.com)
2024-04-04 11:33:41

*Thread Reply:* Hi @Nina Grace Baranduin, I'm working with a group here at Ohio State studying the African wild dog pack at The Wilds here in Ohio. We are gathering data with camera traps and drones.

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-04-04 12:46:27

*Thread Reply:* Check out the folks at WildCRU! And the African Wildlife Conservation Fund. The Zambian Carnivore Programme too.

Namitha Suresh (ns873@cornell.edu)
2024-04-04 15:49:56

*Thread Reply:* Hi @Nina Grace Baranduin! There's also a group at Cornell, including me, studying dholes! We're working with bioacoustic data!

Evan Eskew (eveskew@gmail.com)
2024-04-10 13:36:16

*Thread Reply:* I believe the Abrahms lab at UW (https://www.abrahmslab.com/) may also be doing some of this with wild dogs?

Maricela Abarca (myabarca@dons.usfca.edu)
2024-04-05 14:06:48

Hello everyone 👋 I'm currently a data science graduate student with a background in ecology. Glad to see this space and community that meshes two things I am eager to learn about and advance solutions for!

One of my final projects for school involves thinking about biodiversity monitoring solutions. I'm collecting answers from folks to these questions:

What specific problems are you trying to solve?
What are some challenges you face in solving these problems?
What resources would help you overcome these challenges?
What kinds of technology support your goals?
What are some risks to the success of your goals, and how do you mitigate them?

If anyone might have some time to answer all or any combination of these with a couple sentences, that would really help me out! Feel free to DM me.

Looking forward to engaging with everyone on frontiers in AI conservation! Maricela

🎉 Jon Van Oast, Dunrie Greiling
🌊 Ishan Nangia
👋 Aran Dasan
Sara Beery (sbeery@caltech.edu)
2024-04-08 14:24:27

https://magazine.caltech.edu/post/what-computer-vision-can-tell-us-about-the-natural-world

❤️ Avi Sundaresan, Ruth Oliver, Oisin Mac Aodha, Eric Greenlee, Negar Sadrzadeh, Justin Kay, Omiros Pantazis, charlotte, Dylan Van Bramer (she/her), Anton Alvarez, Ishan Nangia, Mitch Fennell, Andrew Schulz, Cara Appel, Finn J, Chase Van Amburg, Tuan-Anh VU, Aoife Toomey, Alan Stenhouse, Magali Frauendorf, G. Andrew Fricker
👍 Holger Klinck, Dan Morris, Subhransu Maji, Burooj Ghani, Nico Lang
😎 Jon Van Oast
👏 Robin Sandfort, Aakash Gupta
Jes Lefcourt (jeslefcourt@gmail.com)
2024-04-09 18:51:08

Hi everyone! The EarthRanger team is hiring! We're looking for a really good Python developer / architect to join our team. For those not familiar, EarthRanger is a philanthropic initiative by the Allen Institute for Artificial Intelligence (AI2), a non-profit founded by the late Paul Allen, co-founder of Microsoft. EarthRanger is a free software solution that helps conservationists make informed operational decisions for wildlife monitoring and protection by integrating real-time data from wildlife tracking devices, ranger radios, field sensors, etc, etc, etc and providing a comprehensive view of everything going on within a protected area or other area of interest. We're working at about 500 sites in 70 countries, and adding about 1 site per day. Come join us to help protect animals and their habitats! https://www.linkedin.com/jobs/view/3877119361/?refId=Un8cRlQBSPm0Y%2FhbsMjqCg%3D%3D&trackingId=Un8cRlQBSPm0Y%2FhbsMjqCg%3D%3D

❤️ Kevin Rineer, Subhransu Maji, Irina Tolkova, Sara Beery, Arthur Caillau, Jennifer, Chase Van Amburg, Alan Stenhouse
👍 Dan Morris, Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2024-04-09 21:43:22

*Thread Reply:* @Nate Harada

Patrick Beukema (patrickb@allenai.org)
2024-04-10 10:41:06

Is there a good meta-analysis/review paper/recent survey on classifying/understanding/predicting elephant (or similar) behavior from GPS sequences/transponders, either forecasting or more seq2seq based?

Rebecca Wilks (R.C.Wilks@sms.ed.ac.uk)
2024-04-18 09:36:49

*Thread Reply:* Whilst not a review, this paper used GPS tracks to classify bull elephant behaviour into Musth/Non-Musth state via HMMs (state space models) 🐘 https://www.savetheelephants.org/wp-content/uploads/2019/06/2019_Taylor_et_al-2019JAE_reproductivetactictsofbulls.pdf

Matt Hron (matt.hron@wildlifeprotectionsolutions.org)
2024-04-10 17:23:29

Hi everyone! Wildlife Protection Solutions is hiring a full-stack software developer (Python/React) with a focus on AI/ML for wildlife conservation. See the attached doc, the post on WildLabs, or contact info@WildlifeProtectionSolutions.org for more details about the position. Thanks!

Finn J (finnjanson@gmail.com)
2024-04-10 18:47:54

Hi everyone, keen to meet and learn from you all. I'm a Data Scientist, focused primarily on Responsible AI in Healthcare and Sustainability. I'm researching public health, biodiversity and water management in Mexico (particularly Chiapas). If anyone has any connections or experience with sustainability or environmental studies in Mexico (or other Latin American regions), do reach out!

Erik Harden (raster365@gmail.com)
2024-04-10 20:53:56

Hello everyone! Glad to see there's a large community using AI in wildlife conservation. I am a senior at my university, working on fine-tuning the MDv5 model from MegaDetector to classify certain species. I already have an annotated dataset of around 3000 pictures, split into training, validation, and testing. What are the next steps to fine-tune the model?

Thank you in advance for any help 🙂

👋 Ankita Shukla, Kevin Rineer, Sara Beery, Valentin Gabeff, Dan Morris
Dan Morris (agentmorris@gmail.com)
2024-04-10 22:11:52

*Thread Reply:* If MegaDetector is working well and you want to add species classes, 99% of the time I would advise you not to fine-tune MegaDetector. Instead, consider training a separate classifier that operates on the pixels inside the MegaDetector boxes. MEWC offers a good step-by-step process for doing this:

https://github.com/zaandahl/mewc

The only cases where I would advise fine-tuning MegaDetector are the cases where MD itself isn't working well on your species, but this is a more substantial process, and you'll need to come up with bounding boxes somehow, and 3000 images is unlikely to be enough. But, FWIW, if you want to venture into the unknown, I've done this a couple times, specifically for those cases where MD struggles (in both cases with reptiles):

https://github.com/agentmorris/unsw-goannas https://github.com/agentmorris/usgs-tegus

Those repos give some instructions about exactly what I did, but it's a pain!
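The detect-then-classify split Dan describes hinges on turning MegaDetector's normalized [x, y, width, height] boxes into pixel crops for the classifier; a minimal sketch (hypothetical helper name and rounding choice, not an official MD utility):

```python
def md_box_to_pixels(bbox, img_w, img_h):
    """Convert a MegaDetector-style normalized [x, y, width, height]
    box (origin at top left, values in [0, 1]) to integer pixel
    coordinates (left, top, right, bottom) suitable for cropping."""
    x, y, w, h = bbox
    left, top = round(x * img_w), round(y * img_h)
    right, bottom = round((x + w) * img_w), round((y + h) * img_h)
    return left, top, right, bottom

# A detection covering the central quarter of a 2000x1500 image;
# each such crop would then be passed to the species classifier.
print(md_box_to_pixels([0.25, 0.25, 0.5, 0.5], 2000, 1500))  # → (500, 375, 1500, 1125)
```

In practice you would crop with PIL/OpenCV and feed the crops to whatever classifier architecture you train (MEWC wraps these steps for you).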

Dan Morris (agentmorris@gmail.com)
2024-04-10 22:15:09

*Thread Reply:* Clarifying: the "fine-tuning MD" part is easy, not because there's anything special about MD, but because fine-tuning YOLOv5 is easy. It doesn't even require any code; the YOLOv5 CLI is amazing. But creating a dataset of bounding boxes is a PITA, so I wouldn't do that unless you need to.

👍 Sara Beery
Erik Harden (hardene@sou.edu)
2024-04-10 20:56:43

Hi everyone. I am the same person as above ^. Signed into the wrong email 😅

✔️ Jon Van Oast, Kevin Rineer
Ben Weinstein (benweinstein2010@gmail.com)
2024-04-12 18:56:00

I'm starting to play around with overhead images that lack georeferencing, where we want to estimate the unique number of individuals per species in a set of imagery. Consider these two images captured with a hand-held camera from a piloted flight past a bird colony. Human reviewers can use the background textures to try to estimate overlapping areas, and then manually decide whether to count the birds in that portion of the photo. It's fairly trivial using old-school computer vision to stitch them (see example with matching keypoints), but you can easily imagine that stitching together a large mosaic will naturally warp the images, especially as the plane turns, making downstream machine learning difficult.

Instead of stitching then predicting, I'm thinking about this as a 're-id' problem, except that my objects are not visually distinct, unlike whales or tigers. By using the background context, we should be able to match up duplicate detections, assuming the photos are taken in rapid succession. We can then use more general deep learning methods like (https://openaccess.thecvf.com/content/ICCV2021W/DSC/papers/BansalWhereDidISeeItOb[…]anceRe-IdentificationWithAttentionICCVW2021paper.pdf). My idea is to use our existing object detection models, which are pretty good, and then search all corresponding images for matching background context.

I know others are thinking about this space (perhaps @Lasha Otarashvili, I can see an annotation tool in Scout); all thoughts welcome. I am working with my annotation team to curate a dataset, but I sense there are others out there already. Helpful data would be more airborne and fuzzy objects, and less tiger-stripe camera trap data; they feel like different problems. I think this problem of unordered, non-curated sequences of airborne photos could be useful to a large monitoring audience (e.g. African savannah herds @Ben Koger, @Howard L Frederick, coral reefs, and other large aggregations).

👍 Sara Beery, Howard L Frederick, Justine Boulent
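For what it's worth, the duplicate-matching step can be sketched in a few lines, assuming a homography between an image pair has already been estimated from keypoint matches (the `flag_duplicates` helper and the 25 px threshold below are purely illustrative, not from any existing tool):

```python
import numpy as np

def project_points(H, points):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    projected = pts @ H.T
    return projected[:, :2] / projected[:, 2:3]  # back to Cartesian

def flag_duplicates(H_ab, centers_a, centers_b, max_px=25.0):
    """Return indices of detections in image B that land within max_px of a
    detection from image A, after projecting A's centers into B's frame."""
    projected_a = project_points(H_ab, centers_a)
    dists = np.linalg.norm(projected_a[:, None, :] - centers_b[None, :, :], axis=2)
    matches = np.argmin(dists, axis=1)[dists.min(axis=1) < max_px]
    return sorted({int(j) for j in matches})

# Toy example: image B is image A shifted 100 px right (H is a pure translation).
H = np.array([[1.0, 0.0, 100.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
a = np.array([[50.0, 50.0], [200.0, 80.0]])
b = np.array([[150.0, 52.0], [700.0, 300.0]])
print(flag_duplicates(H, a, b))  # [0]: only b[0] coincides with a projected A detection
```

The same scheme extends to boxes rather than centers, but for small birds in far-away imagery, center distance after projection is usually the simpler signal.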
Dan Morris (agentmorris@gmail.com)
2024-04-12 19:05:52

*Thread Reply:* Oooh this is one of those fun threads where we can place our bets on what will actually work, then you'll tell us how wrong we were. I see three approaches here: (1) stitch then detect (using something like SAHI to divide inference into patches), (2) detect on the original images, register the images and translate the detections accordingly (without explicitly producing a mosaic), and use basically-off-the-shelf NMS to remove redundant detections, and (3) the fancy stuff you're suggesting where you explicitly search for matching context.

My bet is that object detection will be pretty robust to warping, as long as you train it on a representative distribution of warping artifacts. So my bet is on (1) being the best bang for your buck, followed by (2); I think (3) will also work, but will get the same results for lots more code.

If you have to train on data that won't see a lot of distortion from the registration, but you expect a lot of distortion at inference time, then my bet switches from (1) to (2).

I've placed my bets!

👍 Ben Weinstein, Sara Beery
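A rough sketch of what approach (2) boils down to once detections from registered images share one coordinate frame: plain greedy NMS over the pooled boxes. Function names and the 0.5 threshold here are illustrative, not from any particular library:

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax) in the shared frame

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes: List[Box], scores: List[float], iou_thresh: float = 0.5) -> List[int]:
    """Greedy non-maximum suppression: keep highest-scoring boxes, drop overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep: List[int] = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

# Two detections of the same bird from overlapping images, plus one distinct bird.
boxes = [(10, 10, 30, 30), (12, 11, 31, 29), (100, 100, 120, 120)]
scores = [0.9, 0.8, 0.95]
print(nms(boxes, scores))  # [2, 0]: the lower-scoring duplicate (index 1) is dropped
```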
Levi Cai (lcai@whoi.edu)
2024-04-12 19:19:47

*Thread Reply:* I would personally start with some off-the-shelf multi-object tracking approaches. Assuming you have high enough frame rates, these should generally work; something like DeepSORT, or whatever the current SOTA variant of that is. I do like the idea of adding a little bit of context around each detection, though, to help with the matching portions.

Levi Cai (lcai@whoi.edu)
2024-04-12 19:27:33

*Thread Reply:* I see, I guess a potential issue here is the constraint of working with unordered data. Though if you had raw video, this wouldn't really be an issue? Also, if you assume you have enough overlap, then stitching as you suggested (or fancier things like Metashape or COLMAP) should get you an ordering, and then you can leverage existing MOT pipelines.

Ben Weinstein (benweinstein2010@gmail.com)
2024-04-12 19:37:33

*Thread Reply:* Not raw video, just hand-held camera photos. Think a series of 10-20 photographs.

Levi Cai (lcai@whoi.edu)
2024-04-12 20:11:33

*Thread Reply:* Ah I see, so not at all high framerate. My initial thought is that (1) and (2) are probably a bit more blended. With (2), you could align the images by stitching, but only to come up with an "ordering" and treat them as a sequence; then, using MOT-style matching with inflated detection regions, you might get some nice robustness properties. With (1), I'm not sure how you would handle the actual stitch without running a detector over the individual images first anyway?

I would be curious whether you're able to just flat-out take the inflated detections, do a DeepSORT-style ranking and thresholding, and see if they match up (maybe with a bunch of discrete rotations). I think this is already fairly similar to your proposed approach, but comparing detections between all images rather than searching through the whole image?

👍 Ben Weinstein, Sara Beery
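A toy version of the inflated-detection matching Levi describes, assuming the two images are already roughly aligned: `inflate` grows each box to pull in background context before greedy IoU pairing. All names and thresholds here are made up for illustration:

```python
def inflate(box, margin):
    """Grow a (xmin, ymin, xmax, ymax) box by `margin` px to include background context."""
    x0, y0, x1, y1 = box
    return (x0 - margin, y0 - margin, x1 + margin, y1 + margin)

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def match_detections(dets_a, dets_b, margin=20.0, min_iou=0.3):
    """Greedily pair detections across two aligned images by inflated-box overlap."""
    candidates = sorted(
        ((iou(inflate(a, margin), inflate(b, margin)), i, j)
         for i, a in enumerate(dets_a) for j, b in enumerate(dets_b)),
        reverse=True)
    matched_a, matched_b, pairs = set(), set(), []
    for score, i, j in candidates:
        if score >= min_iou and i not in matched_a and j not in matched_b:
            matched_a.add(i); matched_b.add(j); pairs.append((i, j))
    return pairs

a = [(10, 10, 20, 20), (200, 200, 210, 210)]
b = [(14, 12, 24, 22)]  # roughly the same bird as a[0], slightly shifted
print(match_detections(a, b))  # [(0, 0)]
```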
Jose Ruiz-Munoz (jfruizmu@unal.edu.co)
2024-04-12 21:28:13

*Thread Reply:* Interesting. It sounds like a special case of multi-view object segmentation

Sara Beery (sbeery@caltech.edu)
2024-04-13 18:12:59

*Thread Reply:* This is assuming the animals don't move at all?

➕ Timm Haucke
Ben Weinstein (benweinstein2010@gmail.com)
2024-04-15 12:01:27

*Thread Reply:* I think they are allowed to move in small areas, but obviously not in major ways. The images are usually captured in rapid sequence, like through an airborne transect as a human photographer passes. I'll keep sharing in this thread as I proceed, thanks everyone for feedback.

👍 Sara Beery
Sara Beery (sbeery@caltech.edu)
2024-04-15 12:59:35

*Thread Reply:* I could imagine that this would work better for ecosystems with some types of background texture than others, very curious to hear how things go!

👍 Ben Weinstein
Howard L Frederick (simbamangu@gmail.com)
2024-04-16 04:06:46

*Thread Reply:* @Ben Weinstein this is indeed exactly the kind of thing we are interested in for the Scout annotation system (and others). I think circling a large colony or herd is a different problem, but figuring out adjacency and image footprints helps solve the same underlying problems:

  1. We often need to de-duplicate observations in adjacent images along flight path.
  2. Sometimes we need to just track who’s been seen in one image and figure out which members of a herd were occluded by vegetation (or neighbours) in the adjacent images.
  3. We typically have, at a minimum, ordered images from the aircraft (sometimes with GPS and IMU data), and can usually be confident of 40-50% overlap between image pairs.
Howard L Frederick (simbamangu@gmail.com)
2024-04-16 04:14:10

*Thread Reply:* Matching adjacent oblique images with keypoints from a moving platform seems to work really poorly with SIFT / SURF, but @Hannes Naude (Innoventix, South Africa) has a tool for footprint matching between adjacent obliques that uses LightGlue and SuperGlue; no need for IMU data, it just checks timestamps or GPS to confirm a given pair is worth trying to match.

Howard L Frederick (simbamangu@gmail.com)
2024-04-16 04:15:38

*Thread Reply:* Nifty little demo of matching available here: https://huggingface.co/spaces/Realcat/image-matching-webui

@Ben Weinstein trying the match with SuperGlue / RANSAC on your 2 images gives 155 matches.

Howard L Frederick (simbamangu@gmail.com)
2024-04-16 04:17:54

*Thread Reply:*

Hannes Naude (naude.jj@gmail.com)
2024-04-16 04:39:39

*Thread Reply:* Hi all.

Thanks for pulling me into this fascinating topic @Howard L Frederick.

Yes, this sounds like a problem that might be handled well by the registration pipeline we have in Detweb (a machine-guided annotation and analysis tool that we originally built for ESS, but that we are currently trying to open up to more general use). It uses approach (2) in Dan's list, which works well for game surveys (both oblique and nadir views) in the absence of stationary content in the images (visible wings, struts, wands, etc. cause it to fail pretty quickly; we expect this to be fixable with some basic masking, but that is not implemented yet). I'm a little worried about scaling to the number of sightings you need to match in a dense bird colony (we use optimal assignment, aka the Munkres algorithm, internally, which scales as O(N^2)), but am keen to give it a go. Can you share a handful of representative images somewhere? I can pull them into the system and let you know how well it works or how badly it fails.

Hannes Naude (naude.jj@gmail.com)
2024-04-16 08:25:03

*Thread Reply:* OK, so I quickly started a new project in Detweb, uploaded your images, and (manually) annotated the birds I saw. I then launched an image registration task, and it seems to handle the matching quite well.

I include a little screencast that shows how this works.

🙌 Howard L Frederick, Dan Morris, Petar Gyurov
Howard L Frederick (simbamangu@gmail.com)
2024-04-17 04:21:37

*Thread Reply:* @Dan Morris I’m betting #2 on your list.

Ben Weinstein (benweinstein2010@gmail.com)
2024-04-17 09:46:43

*Thread Reply:* Awesome! Where can I read more about the tool/strategy? Is it possible for me to extract the relevant part of that code, or at least get a sense of the general steps? I can generate a few more image pairs today. I am working on an annotation workflow for my team to generate eval data.

Ben Weinstein (benweinstein2010@gmail.com)
2024-04-17 10:26:52

*Thread Reply:* Thanks @Howard L Frederick, I made a discussion on that repo; let's see what the maintainer says. https://github.com/Vincentqyw/image-matching-webui/discussions/28. Do you have sample overlapping images with annotations that we can add to an eval set? Just one or two.

👍 Jose Ruiz-Munoz
Hannes Naude (naude.jj@gmail.com)
2024-04-18 04:00:00

*Thread Reply:* @Ben Weinstein Sure, I've been promising for years now to open source the code, but I haven't gotten around to it (mostly because it doesn't live up to my own standards for how clean open-sourced code for general consumption should be, and adding features has always outranked doing the long-overdue cleanup). Last night I decided to just take the leap, so the uncleaned code has been pushed to GitHub.

So the problem of re-identifying multiple sightings as corresponding to the same individual is broken up into two parts:

  1. Figuring out how the two (for present purposes) images relate to one another. We assume that we are dealing with aerial images from a significant height, so this can be well approximated with a homography.
  2. Given the relation between the images, figuring out which sightings are paired with which, and which sightings are unpaired (either because an animal was spotted in image A and missed in image B, or because an incorrect sighting was made in image A which does not match anything in image B). The user will then typically need to manually resolve the second category.

Detweb tries to do as much as possible directly in the frontend (written in React) and just pushes results to serverless services on AWS (mostly DynamoDB). However, part 1 above requires CUDA, so it is implemented as an ECS container image with all the dependencies pre-installed, which gets launched automatically when the user pushes image registration work to an SQS queue. The relevant code is in https://github.com/WildEyeConservation/Detweb/blob/develop/cdk/containerImages/lightGlueImage/code/processSQS.py and uses LoFTR to do the matching. A much cleaner example (without all the AWS integration and housekeeping clutter) is in https://github.com/WildEyeConservation/Detweb/blob/develop/cdk/containerImages/lightGlueImage/code/lightglueMatch.py. You seem to be familiar with traditional keypoint matching, so there will be nothing new here for you; the only change is to use SuperPoint (rather than SIFT/SURF) for keypoint detection/description and LightGlue for matching. From that point on you can use the matched points for whatever you like (typically throw them into RANSAC to estimate a homography).

The most relevant code for part 2 lives in the React frontend, namely https://github.com/WildEyeConservation/Detweb/blob/develop/src/useOptimalAssignment.jsx. Here I use the Munkres algorithm to calculate an optimal assignment between pairs and create a new set of annotations that are displayed to the user for confirmation. This code is currently being actively worked on and is not really commented; most of the complexity has very little to do with the algorithms and mostly with a self-taught React developer wrangling React to try and get an acceptable UX. 😳

I'd be happy to answer any questions you may have.

👍 Ben Weinstein
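To make the part-2 step concrete: given a cost matrix of distances between projected sightings in one image and sightings in the other, optimal assignment picks the pairing with minimum total cost. Detweb uses the Munkres (Hungarian) algorithm for this; the brute-force version below gives the same answer on tiny inputs and is only a sketch (it is O(n!), and the thresholding that leaves poor matches unpaired is omitted):

```python
from itertools import permutations

def optimal_assignment(cost):
    """Brute-force optimal assignment for a small square cost matrix.

    For real workloads you'd use the Munkres algorithm (e.g.
    scipy.optimize.linear_sum_assignment); exhaustive search over
    permutations is only for illustration on tiny examples.
    """
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(enumerate(best))

# Pairwise distances (px) between projected sightings in image A and sightings in image B.
cost = [
    [2.0, 90.0, 75.0],
    [88.0, 3.0, 60.0],
    [70.0, 65.0, 4.0],
]
print(optimal_assignment(cost))  # [(0, 0), (1, 1), (2, 2)]
```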
Ben Weinstein (benweinstein2010@gmail.com)
2024-04-18 14:34:09

*Thread Reply:* Sure, I'll respond in DM so as not to clutter the thread.

Ben Weinstein (benweinstein2010@gmail.com)
2024-04-25 12:42:20

*Thread Reply:* For those interested in following this thread, here is where I am in terms of a reproducible example. https://github.com/weecology/DoubleCounting

🙌 Sara Beery
Ben Weinstein (benweinstein2010@gmail.com)
2024-04-25 17:47:41

*Thread Reply:* And with an image. This is using simple serial homography among images, as described by @Hannes Naude. I still want to explore the 3D approach since it makes fewer assumptions, but for far-away images in rapid sequence, this is a reasonable start. I will be wrapping this into DeepForest so users can remove double counts among a series of images.

👍 Sara Beery
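For anyone curious what "serial homography" means mechanically: the pairwise homographies between consecutive images can be composed so that every image maps into the first image's frame. A minimal numpy sketch, assuming the pairwise homographies have already been estimated (the translation example is synthetic):

```python
import numpy as np

def chain_homographies(pairwise):
    """Given homographies H_{i -> i+1} for consecutive image pairs, compose
    them into homographies mapping every image into image 0's frame."""
    to_first = [np.eye(3)]
    for H in pairwise:
        # H maps frame i into frame i+1; invert and compose to reach frame 0
        to_first.append(to_first[-1] @ np.linalg.inv(H))
    return to_first

# Two pure translations: each image's frame shifted 100 px from the previous one.
H01 = np.array([[1.0, 0, -100], [0, 1, 0], [0, 0, 1]])  # image 0 coords -> image 1 coords
Hs = chain_homographies([H01, H01])
pt = np.array([50.0, 60.0, 1.0])  # a point seen in image 2 (homogeneous coords)
print((Hs[2] @ pt)[:2])  # same point expressed in image 0's frame: [250.  60.]
```

One caveat worth noting: errors accumulate along the chain, which is part of why the 3D (bundle-adjustment-style) approach makes fewer assumptions for long sequences.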
Ben Weinstein (benweinstein2010@gmail.com)
2024-04-25 17:48:31

*Thread Reply:* If others here have sample data I can run through this, I'd like to write up a blog post. @Howard L Frederick

Ben Weinstein (benweinstein2010@gmail.com)
2024-05-01 14:30:36

*Thread Reply:* For those following along on my attempt to reduce double counting in non-uniformly overlapping airborne photos combined with object detection, here is the first example of success. In blue are original detections that are deemed doubles; pink are detections that are maintained. The selection rule here is 'left-hand', meaning that when two detections overlap, we choose the earlier image. I have also written 'right-hand' and 'highest score'. I am starting to document this in case it is useful to others; it should only be considered a simple solution and a baseline for more sophisticated approaches. https://github.com/weecology/DoubleCounting

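The selection rules described above ('left-hand', 'right-hand', 'highest score') amount to a small tie-breaking policy over duplicate pairs. A hypothetical sketch of that policy, not the actual DoubleCounting code:

```python
def resolve_duplicates(detections, pairs, rule="left-hand"):
    """Decide which detection in each duplicate pair to keep.

    detections: list of dicts with 'image_index' and 'score' keys
    pairs: list of (i, j) index pairs flagged as the same individual
    """
    drop = set()
    for i, j in pairs:
        a, b = detections[i], detections[j]
        if rule == "left-hand":      # keep the detection from the earlier image
            drop.add(j if a["image_index"] <= b["image_index"] else i)
        elif rule == "right-hand":   # keep the detection from the later image
            drop.add(i if a["image_index"] <= b["image_index"] else j)
        elif rule == "highest score":
            drop.add(j if a["score"] >= b["score"] else i)
    return [d for k, d in enumerate(detections) if k not in drop]

dets = [
    {"image_index": 0, "score": 0.70},
    {"image_index": 1, "score": 0.95},  # same bird as detection 0
    {"image_index": 1, "score": 0.80},
]
print(len(resolve_duplicates(dets, [(0, 1)], rule="left-hand")))             # 2
print(resolve_duplicates(dets, [(0, 1)], rule="highest score")[0]["score"])  # 0.95
```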
Marius Miron (marius.miron@earthspecies.org)
2024-04-15 08:55:13

Dear AI for Conservation Community,

VIHAR-2024 is the fourth international workshop on Vocal Interactivity in-and-between Humans, Animals and Robots, a satellite event of Interspeech 2024. It will take place in a hybrid format: hosted in Kos, Greece on 6th September 2024 and online on 9th September 2024. VIHAR-2024 aims to bring together researchers studying vocalization and speech-based interaction in-and-between humans, animals and robots from a variety of different fields, and will provide an opportunity to share and discuss theoretical insights, best practices, tools and methodologies, and to identify common principles underpinning vocal behavior in a multi-disciplinary environment.

We invite original submissions of 5-page papers (with the 5th page reserved exclusively for acknowledgements and references) or 2-page extended abstracts in all areas of vocal interactivity. Accepted papers will be compiled in the VIHAR-2024 proceedings, which will be published online. All papers should follow the Interspeech 2024 template.

Suggested workshop topics may include, but are not limited to, the following areas:
• Self-supervised learning for vocal signals
• Generative audio systems for vocal interactivity
• Function of vocalizations: discovery of information embedded within signals, testing for functional reference, linking vocal signals to behavior
• Physiological and morphological comparisons between vocal systems in animals and humans
• Vocal imitation and social learning of vocal signals
• Valence and emotion in vocal signals
• Inter- and intraspecies comparative analyses of vocalizations
• Interspecific vocal interactivity between non-conspecifics
• Speech perception and production in human-human interactions and human-robot interactions
• Comparative analysis of vocal signals in vocal interactivity
• Theory development of vocal interaction interfaces

Submission instructions can be found at the EasyChair submission page: https://easychair.org/conferences/?conf=vihar2024

Important dates (AoE):
• Submission deadline: 9th June 2024
• Notification of acceptance: 8th July 2024
• Final versions for inclusion in proceedings: 15th July 2024
• Author registration closes: 31st July 2024
• Workshop: 6th and 9th September 2024
This event is sponsored by Earth Species Project (https://www.earthspecies.org/ ) and supported by the VIHAR steering committee (http://www.vihar.org/ ) and the International Speech Communication Association (http://www.isca-speech.org/).

Organizing committee: Marius Miron, Yossi Yovel, Sara Keen, Eliya Nachmani, Paola Peña, Björn Schuller, Olivier Pietquin

charlotte (deshchang@gmail.com)
2024-04-15 10:00:04

For people planning to attend the North American Congress on Conservation Biology, the early bird registration deadline has been extended from today to April 26 (FMI here). For folks who are interested in social-ecological / coupled human-natural systems and NLP, please take note of our !

Sako Arts (sako@fruitpunch.ai)
2024-04-16 06:23:49

Hi all, we have a cool new FruitPunch AI for Conservation Challenge coming up, and we are still looking for experts in bioacoustics! https://app.fruitpunch.ai/challenge/ai-for-forest-elephants-2 In AI for Forest Elephants we aim to build an AI to detect elephant rumbles, gunshots, and vehicle noises, in collaboration with Cornell's Elephant Listening Project, using an enormous dataset of audio recordings collected from a recorder grid in the Central African rainforest. If you are an expert you can be of real help! The Challenge starts on the 30th of April and will last for 10 weeks; hit the Apply as Expert button to join in!

😎 Jon Van Oast, Dan Morris
❤️ Marius Miron, Ishan Nangia, Anton Alvarez, Aakash Gupta, Ed Miller
🐘 Alexander Merdian-Tarko, Ed Miller
Tom August (tomaug@ceh.ac.uk)
2024-04-17 08:58:43

Can anyone recommend a good paper that reviews how classification scores are created, and how they should be interpreted, for a non-ML audience? I'm interested in this with regard to interpreting output from image classifiers. Thanks in advance!

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-04-20 05:40:13

*Thread Reply:* https://arxiv.org/abs/2106.04972 https://towardsdatascience.com/how-to-use-confidence-scores-in-machine-learning-models-abe9773306fa https://towardsdatascience.com/aleatoric-and-epistemic-uncertainty-in-deep-learning-77e5c51f9423 https://blog.paperspace.com/aleatoric-and-epistemic-uncertainty-in-machine-learning/ https://arxiv.org/abs/1910.09457

Dan Morris (agentmorris@gmail.com)
2024-04-21 12:13:15

*Thread Reply:* If you're looking for a general tutorial, Eric's links look great. If you're looking for something specific to conservation, I don't know of anything that's really a tutorial, but this paper is about confidence scores in the context of camera traps, so the introduction maybe isn't far from what you're looking for:

https://www.biorxiv.org/content/10.1101/2023.11.10.566512v2.full

Ghazi Randhawa (muhammadghazirandhawa@gmail.com)
2024-04-20 12:36:29

Are there any people here, or research groups you know of, who work at the intersection of AI & data science, regulatory economics/law, and conservation topics?

charlotte (deshchang@gmail.com)
2024-04-23 18:30:21

*Thread Reply:* Hey Ghazi - I’m sure that others can chime in with better examples, but I know of a few teams that fulfill at least 2 out of the 3 dimensions (AI/DS + Conservation/Environment in particular).

  1. Lea Berrang-Ford’s collaboration
  2. A multi-institution food systems project led by Jaron Porciello
  3. A project on mapping climate impacts from the Mercator institute
  4. At the risk of tooting my own horn, I've co-led a project with The Nature Conservancy using LLMs to categorize and map evidence for natural climate solutions and their impacts on people and nature. The first paper in this body of work is currently under review.

All of these projects largely focus on using NLP (LLMs / transformer models in particular) to examine published (and sometimes gray-lit) datasets at a large scale. I haven't seen research groups working across AI + conservation + law in particular, but I'd be curious to hear about examples.
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-04-22 10:19:20

https://x.com/lauransotomayor/status/1781739493579522500

😎 Timm Haucke, Sara Beery, David Russell, Maricela Abarca
🎉 Jon Van Oast, Sam Lapp, Clemens Mosig, Alan Stenhouse
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-04-23 11:12:21

> Today, Meta and World Resources Institute are launching a global map of tree canopy height at a 1-meter resolution, allowing the detection of single trees at a global scale. In an effort to advance open source forest monitoring, all canopy height data and artificial intelligence models are free and publicly available. https://sustainability.fb.com/blog/2024/04/22/using-artificial-intelligence-to-map-the-earths-forests/

🌳 Shir Bar, Arthur Caillau, Katie Breen, Patrick Beukema, Omiros Pantazis, Maricela Abarca, Burak Ekim, Enis Berk Çoban, Ben Weinstein, Jose Ruiz-Munoz, Timm Haucke, Prabath Gunawardane, Risa Shinoda, Clemens Mosig, mimi, Alexander Merdian-Tarko, Toryn Schafer, Ishan Nangia, Casey Youngflesh, Chase Van Amburg, Jane Wu, Eric Greenlee, Alan Stenhouse, Edward Bayes
🎉 Jon Van Oast, Tiziana Gelmi Candusso, Jane Wu
🙌 Danica Stark, Jane Wu, Arky
Nate Harada (nharada1@gmail.com)
2024-04-23 11:29:18

*Thread Reply:* This is huge! The DinoV2 backbone is super powerful and having one that’s pre-trained on satellite and aerial data is amazing. It will certainly generalize well beyond trees.

🙌 Carly Batist, Edward Bayes
💯 Carly Batist, Maricela Abarca
Ben Weinstein (benweinstein2010@gmail.com)
2024-04-23 14:41:08

*Thread Reply:* @Nate Harada do you want to show a fine-tuning example? We could rig up some bird data from https://zenodo.org/records/5033174

Nate Harada (nharada1@gmail.com)
2024-04-23 15:02:45

*Thread Reply:* Yeah, I was gonna try and play with this when I have some time.

👍 Ben Weinstein
Sonny Burniston (sonnyburniston@yahoo.co.uk)
2024-04-23 16:04:44

*Thread Reply:* Quite interested in this proposed use case above. What would the outputs of this proposed fine tuning look like?

Katie Breen (cbreen@uw.edu)
2024-04-24 07:10:37

*Thread Reply:* Thank you for sharing, Carly! I am also curious about working with fine-tuned applications!

Nate Harada (nharada1@gmail.com)
2024-04-26 16:38:56

*Thread Reply:* Hmm so I’m seeing a developing opinion that the actual tree height map isn’t that good, curious if people who actually need to do canopy height estimation have opinions on that specific task. I’m looking more specifically at using the backbone only as a starting point for other tasks, for example few-shot classification for wildlife or tree species detection.

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-04-26 17:21:05

*Thread Reply:* @Nate Harada could you link to places you've seen those opinions? Or is it just in-passing conversations you've heard?

Nate Harada (nharada1@gmail.com)
2024-04-26 17:54:42

*Thread Reply:* Some of it was word of mouth, but here are a few tweets (obviously take them with a grain of salt, since some are from competitors):

https://twitter.com/AdPscual/status/1783515862365831623 https://twitter.com/TC_Chakraborty/status/1783839579457614165 https://twitter.com/arjenvrielink/status/1783795483585970442

I have no skin in the game for this particular application, just sharing what I’ve seen/heard. Would be interested to hear people finding the model useful for their problems! I’d love if the backbone was broadly useful, personally.

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-04-26 18:36:13

*Thread Reply:* interesting feedback!

Depanshu Sani (depanshus@iiitd.ac.in)
2024-04-24 09:18:52

Hey everyone 🎊 I'm glad to be a part of this community! I am a Ph.D. student affiliated with the Vision Lab at IIIT Delhi, India. I just realized that this might be a good space to showcase our recent work on AI for Agriculture, recently accepted at IEEE WACV 2024 as an oral presentation.

SICKLE: A Multi-Sensor Satellite Imagery Dataset Annotated with Multiple Key Cropping Parameters

🌾 What is SICKLE? 📡
It is a first-of-its-kind dataset constituting a time-series of multi-resolution imagery from 3 distinct satellites: Landsat-8, Sentinel-1 and Sentinel-2. Our dataset includes multi-spectral, thermal and microwave sensors, covering the January 2018 to March 2021 period in the Cauvery Delta region of Tamil Nadu, India.

🌾 What are the key features of the dataset? 📡

  1. The dataset consists of images from multiple satellites having a variety of sensors.
  2. Each agricultural plot is annotated with multiple cropping parameters, for example, crop type, crop yield and phenology dates, also allowing multi-task learning.
  3. The annotations are available at 3 scales, i.e. 30m, 10m and 3m, also allowing high-resolution inference from low-resolution satellite data.
  4. The dataset is organized in a way that can be readily used by researchers from multiple domains, including agronomy, remote sensing and machine learning.
  5. Each temporal sequence in the dataset is constructed by considering the cropping practices followed by farmers primarily engaged in paddy cultivation.
  6. Tasks that can be performed: semantic segmentation, crop phenology date prediction, yield prediction, panoptic segmentation, synthetic band generation, image super-resolution, cross-satellite sensor fusion, etc.

🌾 Why is this dataset important? 📡

  1. Monitoring the cropping pattern is a composition of multiple tasks, for instance, crop classification followed by yield estimation. SICKLE is the first work to provide annotations of multiple cropping parameters for the same set of plots.
  2. Heterogeneous farming is practiced in the study region, with different growing seasons. Moreover, the primary crop (i.e. paddy) is grown 2-3 times a year. Thus, the straightforward way of using a time-series of satellite images (say, all images from Jan-Dec) is not suitable. We present a novel strategy for preparing time-series data over a seasonal temporal window that is consistent with the regional cropping standards typically followed by the farmers for crop production.
  3. The majority of the plots in this study region are small farms, with more than 95% of plots having an area of 1 acre or less and an average size of 0.38 acres, thus making prediction tasks more challenging for low- and medium-resolution satellite images.

In case of any queries, please reach out to me at depanshus@iiitd.ac.in. Also check out our short YouTube video to get a high-level overview of our work. If you think this work might be beneficial for you, please visit our official website.
👋 Konstantin Klemmer, Dan Morris, Sara Beery, Holly Houliston, Risa Shinoda, Eric Greenlee, Don Cosseboom
🙌 Ishan Nangia, charlotte, Don Cosseboom
Hugo Magaldi (magaldi.hugo@gmail.com)
2024-04-26 04:01:02

[ML questions for Species Classifier]

Hi everyone! I'm building a species classifier from camera trap images, and I am looking for advice on data-related questions, as I'm new to the conservation field 🙂
• Is there a common practice to deal with class imbalance in the training data? I'm using class weights so far, so that each class has equal weighting.
• Similarly, is there a standardized way to remove pseudo-duplicates (consecutive images from the same video clip/sequence where the animal has barely moved)? I'm experimenting with FiftyOne.
Thank you in advance!

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-04-26 09:07:11

*Thread Reply:* @Dan Morris

Depanshu Sani (depanshus@iiitd.ac.in)
2024-04-26 11:26:45

*Thread Reply:* @Hugo Magaldi I am also new to the conservation area, so I'll give my opinion from the perspective of a core-ML/CV person.

  1. You can try focal loss instead of cross-entropy loss. Focal loss can be thought of as a variation of cross-entropy loss where higher weight is given to samples that are hard to classify (in this case, the samples that are under-represented). It is a pretty standard and easy approach to dealing with class imbalance.
  2. I am not sure I understood the term pseudo-duplicate correctly (this might be because of my lack of familiarity with FiftyOne). I am assuming that by pseudo-duplicate you mean the same image replicated in the dataset multiple times with slight variations, such as cropping or rotation. If this is the case, I'd suggest you first analyze the dataset. If you find that the class imbalance exists only because of these pseudo-duplicates, and that without them the dataset would be roughly balanced, then you might not even need focal loss. Instead, you can implement a class-specific augmentation technique: for the under-represented classes in your dataset, you can virtually add more samples by applying random augmentations. As a simple example, if you have two classes in your dataset, say X and Y, and only because of these pseudo-duplicates 80% of the samples belong to X, then you can achieve a balanced dataset by randomly cropping, rotating and flipping the images that belong to class Y until the two classes have the same proportion of images.

I hope this helps. Let me know in case you have any further questions.
👍 Hugo Magaldi
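A minimal numpy sketch of the binary focal loss Depanshu describes (Lin et al. 2017), just to show how the (1 - p_t)^gamma factor down-weights easy examples; in practice you'd use your framework's built-in implementation:

```python
import numpy as np

def focal_loss(probs, labels, alpha=0.25, gamma=2.0):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma so
    well-classified (mostly majority-class) examples contribute little."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    p_t = np.where(labels == 1, probs, 1 - probs)       # probability of the true class
    alpha_t = np.where(labels == 1, alpha, 1 - alpha)   # per-class weighting factor
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

labels = np.array([1, 1])
probs = np.array([0.95, 0.55])  # one easy example, one hard example
losses = focal_loss(probs, labels)
# The hard example dominates the loss by orders of magnitude.
print(losses[1] / losses[0] > 100)  # True
```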
Dan Morris (agentmorris@gmail.com)
2024-04-26 11:53:16

*Thread Reply:* Re: pseudo-duplicates... do you mean images from the same sequence/burst that are just similar, i.e. where an animal hasn't moved between images, or literal image copies? If you mean images from the same sequence/burst where the animal hasn't moved, my recommendation would be to leave them in, unless you're really limited by training cycles.

Re: rare classes... I don't have a very useful technical answer; class weighting seems reasonable to me, and more machine-learning-y people than me can weigh in on weighting schemes and loss functions. But I'll give an unsolicited non-technical answer: I think sometimes we worry too much about performance on rare classes in a way that is detrimental to overall system performance. If you are training a real-time classifier whose only job is to report instances of rare classes, by all means, weight those heavily. Similarly, if you have your sights set on full automation, that's... ambitious, but then yes, you need to perform well on rare classes, and it might be worth paying a price in accuracy on common classes.

But for the more common scenario where you're training a classifier to help someone process a gazillion camera trap images, more often than not the benefit you provide to the user will be entirely dominated by having high precision and adequate recall on the most common classes, even if their focal species are rare, and the marginal cost of having the user review the rare classes manually is very small. I'm not telling you not to focus on rare class performance, just reminding you to consider the use case and to evaluate whether hurting performance on common classes is worth the lift to performance on rare classes (there's no free lunch!).

👍 Sara Beery, Hugo Magaldi, Shir Bar
Hugo Magaldi (magaldi.hugo@gmail.com)
2024-04-26 12:22:33

*Thread Reply:* @Depanshu Sani Thank you for your answer. I will try out focal loss. By pseudo duplicates I meant very similar images coming from the same sequence. But indeed data augmentation can be useful for underrepresented classes.

Hugo Magaldi (magaldi.hugo@gmail.com)
2024-04-26 12:39:50

*Thread Reply:* @Dan Morris Thank you for your insights. Taking into account the time for user review, which is much smaller for rare classes, is a good point. The example I had in mind is a classifier for 5 species of monkeys, with one species making up 90% of the dataset. The other 4 are not rare species, just less observed, and I still aim for good accuracy on all 5 classes.

Dan Morris (agentmorris@gmail.com)
2024-04-27 19:54:01

Boring (but significant) update from the LILA-verse:

  1. LILA data is now hosted on GCP, Azure, and AWS. All three copies are identical.
  2. The only action item is for anyone who has URLs lying around that point to LILA data on Azure: all Azure URLs have changed for reasons that aren't very interesting. See dataset pages for new URLs. The old URLs will stop working any day now.
  3. Huge thanks to the Google Cloud Public Datasets Program, the Microsoft AI for Good Lab, and Source Cooperative, who are providing hosting on GCP, Azure, and AWS, respectively. Multi-cloud hosting is hopefully helpful for anyone working on this data on the cloud (since now with high probability the data is hosted on the cloud you're working on), but just as important, this helps with long-term stability, i.e. this data should be available on the cloud for the foreseeable future, even if any one of these hosting options disappears.

@Sara Beery and @Timm Haucke are also maintaining an offline copy at MIT, so that just in case by some catastrophe LILA loses support from all three cloud providers at the same time, it won't vanish.

And last but not least, there's also a huge pile of hard drives under my desk with all of LILA on them, so I can still work on camera trap stuff after the zombie apocalypse.

If anyone wants to see all of LILA described in one super-long markdown file, the Source Cooperative documentation is here:

https://beta.source.coop/repositories/agentmorris/lila-wildlife/description/

Thanks to GCPD, AI4G, SC, and Sara/Timm!

🙌 Justin Kay, Viktor Domazetoski, Timm Haucke, Bernie Boscoe, Felipe Parodi, Neha Hulkund, Shir Bar, Ștefan Istrate, Mitchell Rogers, Scott Smith, Jason Holmberg (Wild Me), Hugo Magaldi, Aniruddha Saha, Sara Beery, Maricela Abarca, Chase Van Amburg, Dylan Van Bramer (she/her), Matt Hron, Fagner Cunha, Holly Houliston, Carly Batist, Nico Lang, Elizabeth Campolongo, Enis Berk Çoban, Suzanne Stathatos, Malte Pedersen, Jose Ruiz-Munoz, Mitch Fennell, Toryn Schafer, Lauren Harrell, Yuval Boss, Casey Clifton, Edward Bayes, Anton Alvarez, Sam Lapp
:male_zombie: Devis Tuia, Sara Beery, Valentin Gabeff, Mitch Fennell, Aakash Gupta
🎉 Jon Van Oast
🙌:skin_tone_3: Alan Stenhouse
👍 Piotr Tynecki, Sepand Dyanatkar
Dan Morris (agentmorris@gmail.com)
2024-04-28 18:03:05

*Thread Reply:* Also, in case anyone was wondering how MegaDetector does on zombies, here are MD results for more or less the first hit for "zombie apocalypse" on Google images:

😂 Mitchell Rogers, Burak Ekim, Shir Bar, Carly Batist, Elizabeth Campolongo, Mitch Fennell, Alan Stenhouse, Piotr Tynecki
🧟 Jon Van Oast, Sara Beery
Sara Beery (sbeery@caltech.edu)
2024-04-30 14:59:33

https://twitter.com/sarameghanbeery/status/1785382847135760472

X (formerly Twitter)
❤️ Ankita Shukla, Oisin Mac Aodha, Shir Bar, Sonny Burniston, Elizabeth Campolongo, Timm Haucke, Gustavo Perez, charlotte, Alessandra Vidal Meza, Gabriel Manso, Dylan Van Bramer (she/her), Omiros Pantazis, Alan Stenhouse, Yuerou Tang, Andrew Schulz, Sam Heinrich, Rebecca Wilks
😎 Jon Van Oast, Timm Haucke
🙌 Maricela Abarca
👍 Piotr Tynecki
Oisin Mac Aodha (macaodha@caltech.edu)
2024-04-30 15:03:02

*Thread Reply:* For those that don't use twitter:

Online citizen science platforms like iNaturalist and Macaulay Library contain a wealth of images but are hard to search using text. We are looking for ideas so we can develop the next generation of AI models that can help users to better search these image collections.

We are currently building a new dataset that consists of wildlife images combined with text descriptions. We want this dataset to be representative of the interesting real-world questions that citizen scientists, ecologists, researchers, etc. want to answer.

As we get started, we hope to collect ideas from the community about what types of image searches people would be interested in performing. If this is of interest, please fill out this short form: https://forms.gle/CmRf826AkJPkvhjX7

PS You can fill it out more than once.

❤️ Sara Beery, Jon Van Oast, Elizabeth Campolongo, charlotte, Dylan Van Bramer (she/her), Omiros Pantazis, Konstantin Klemmer, Andy Viet Huynh, Kevin Rineer, Kakani Katija
Kakani Katija (kakani@mbari.org)
2024-05-01 12:26:44

FathomVerse is now available to download for free on the App Store and Google Play!

Download on the App Store

FathomVerse is a cozy, community science experience that will take you on an amazing journey through the depths of the ocean. Not only that, it also helps improve AI models that scientists use to discover ocean life. FathomVerse allows anyone with a smartphone or tablet to take part in ocean exploration and discovery.

• 🎮 Play minigames to interact with real images collected by researchers and discover ocean animals.
• 🔎 Hone your skills and learn how to visually identify 40+ groups of ocean animals.
• ❣️ Save your favorite images and curate a personal gallery.
Join us on an ambitious journey to create a game that connects the power of community with cutting-edge technology for the benefit of ocean life. Dive deeper into the FathomVerse community on Instagram and TikTok, and join our Discord.

Thanks to lots of members of the AI for Conservation community, including @Sara Beery @Oisin Mac Aodha @gvanhorn @Tanya Berger-Wolf @Genevieve Patterson for input and support.

Let us know what you think!

Onwards and downwards!

FathomVerse
App Store
play.google.com
🦈 Ben Weinstein, Elizabeth Campolongo, Lauren Harrell, Timm Haucke, Levi Cai, charlotte
🐟 Oisin Mac Aodha, Enis Berk Çoban, Gracie Ermi, Shir Bar, Lauren Harrell, Timm Haucke, Levi Cai, Cameron Trotter
❤️ Sergei Nozdrenkov, gvanhorn, Lauren Harrell, Timm Haucke, Tarun, Levi Cai
🦀 Chris Lange, Lauren Harrell, Timm Haucke, Levi Cai, Maricela Abarca
🐠 Mitchell Rogers, Levi Cai
👍 Piotr Tynecki
PG (premsgill@gmail.com)
2024-05-01 14:02:52

Hi All. Great to be part of this community. I'm curious to hear what version control and backup workflows people in this community use when dealing with large assets beyond code (e.g., imagery, 3D assets, video) that Git isn't suited for.

Patrick Beukema (patrickb@allenai.org)
2024-05-01 14:43:02

*Thread Reply:* Our team uses git-lfs for computer vision test cases because I like the GitHub repo being the single source of truth for a given project. We typically store our data on GCP (in buckets), and we do version-control that (Google will do it for you, albeit at additional cost). For metadata/annotations we either release via a bucket, or on GH directly if they are small enough. We have also looked at (but don't currently use) dvc.org, which we found was not optimal at the time for larger datasets (TB/PB), but that may have changed since.

One useful abstraction that is worth thinking about is decoupling the human/machine annotations of your data from the raw data. In our case, we get raw data from a lot of public sources (like NASA Earthdata and ESA), and it may be most convenient for users to access the raw data from those APIs directly (although not always).
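For anyone new to git-lfs, the setup described here boils down to a few patterns in `.gitattributes`; `git lfs track "*.tif"` writes lines like these for you (the patterns below are illustrative, pick ones matching your own assets):

```
# .gitattributes: large binary assets routed through Git LFS
*.tif filter=lfs diff=lfs merge=lfs -text
*.mp4 filter=lfs diff=lfs merge=lfs -text
# small metadata (e.g. annotation JSON) needs no entry and stays in plain git
```

Commit `.gitattributes` alongside the assets so every clone resolves the LFS pointers the same way.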

👍 PG
Elizabeth Campolongo (e.campolongo479@gmail.com)
2024-05-03 13:40:51

*Thread Reply:* We use Hugging Face as a model and dataset repository and archive (they partnered with DataCite to provide DOIs). Larger files are stored through git-LFS (at no cost), and it has a lot of nice integration features through huggingface_hub for loading models and datasets, and even running demos of various models and projects. This also makes it easier for integration with code in GitHub; you can also load assets through URLs. The UI isn't quite as nice as GitHub for active development (PRs and branches have less clear, easily accessible features since they're merged into one), but overall it's been working well for us.

huggingface.co
Patrick Beukema (patrickb@allenai.org)
2024-05-01 14:39:30

Hey all, question for the group about measuring daylight. We are currently training some models for maritime intelligence that would benefit from knowing the amount of daylight at a given point on Earth (mostly on the ocean/high seas). We also need it to run at reasonable scale cheaply (100M inferences/day on a small CPU, ideally). Does anyone know what the best library is for that? I came across https://astral.readthedocs.io/en/latest/ which seems pretty decent, but I don't really know this space. (Also, this will be used in other contexts, like on land for wildlife monitoring, if that is relevant.)
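Not a substitute for astral's ephemeris, but if per-call cost matters at 100M inferences/day, day length also has a cheap closed-form approximation via the sunrise equation with a simple cosine solar-declination model; a rough, dependency-free sketch (accuracy is approximate, and polar day/night is handled by clamping):

```python
import math

def day_length_hours(lat_deg: float, day_of_year: int) -> float:
    """Approximate hours of daylight at a latitude on a given day of year."""
    # Solar declination in degrees (simple cosine approximation)
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour-angle argument of the sunrise equation; clamp for polar day/night
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))
    return (24.0 / math.pi) * math.acos(x)
```

At the equator this gives 12 hours year-round; at 80°N it clamps to 0 in midwinter and 24 in midsummer.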

Chris Doehring (chrisdo@earthranger.com)
2024-05-01 14:52:25

*Thread Reply:* You might look at Ephem too.

PyPI
🙏 Patrick Beukema
Siddharth Gupta (emailsiddha@gmail.com)
2024-05-01 19:34:51

Hello!! I'm a high school student super interested in this area - what's the best way for me to get involved?

Tom Ratsakatika (trr26@cam.ac.uk)
2024-05-02 08:03:55

Hi everyone! Has anyone implemented DeepFaune's classification model in their own project? I'm currently trying to use their model for an automated camera-trap-based alert system for bears and wild boar in Romania. It would be extremely helpful to see how others have managed to get the pre-trained models working in their own pipelines, if anyone is willing to share their code!

Ref: https://plmlab.math.cnrs.fr/deepfaune/software/-/tree/master

GitLab
Tom Ratsakatika (trr26@cam.ac.uk)
2024-05-02 08:06:06

*Thread Reply:* @Piotr Tynecki - any insights in addition to your posts hugely appreciated! https://wildlabs.net/discussion/successfully-integrated-deepfaune-video-alerting-system

wildlabs.net
Sara Si-Moussi (sara.si-moussi@univ-grenoble-alpes.fr)
2024-05-02 11:18:19

*Thread Reply:* @Vincent Miele CNRS

Ed Miller (ed@hypraptive.com)
2024-05-02 16:41:07

*Thread Reply:* Are you working with @Thijs and Hack the Planet?

✅ Thijs
Piotr Tynecki (piotr@tynecki.pl)
2024-05-06 05:23:10

*Thread Reply:* @Tom Ratsakatika I will share the post update this week.

👍 Tom Ratsakatika
Piotr Tynecki (piotr@tynecki.pl)
2024-05-10 01:40:00

*Thread Reply:* @Tom Ratsakatika did you see this reply? I shared the code snippet in PM.

Tom August (tomaug@ceh.ac.uk)
2024-05-03 12:29:05

€2,000 travel grants are available for researchers interested in insect monitoring using automated cameras, computer vision, and related topics. Grants are available to anyone from a COST member country (https://www.cost.eu/about/members/), which covers Europe plus some others.

• Grants cover travel, accommodation, food, and drink
• You cover the awesome science
Find out more: www.insectai.eu

🙌 Casey Clifton, Carly Batist, Ishan Nangia, Anton Alvarez
👍 Piotr Tynecki
John Dziak (dziakj1@gmail.com)
2024-05-06 21:59:06

I wanted to pass this along in case anyone was interested: "OCTO is pleased to announce that it will host:

Webinar: Netting the Future: AI's Role in Sustainable Fisheries Across the Indo-Pacific
Presented by: Stuart J. Green of Blue-Green Advisors and Farid Maruf of USAID-SUFIA-TS, Tetra Tech
Date/Time: Tuesday, May 28, 9 am US EDT/6 am US PDT/1 pm UTC/2 pm BST/3 pm CEST/8 pm WIB
Description: Artificial Intelligence (AI), Advanced Analytics (AA), and Machine Learning (ML) can be transformational in promoting fair, legal, and sustainable fisheries management across the Indo-Pacific region. This webinar will delve into the key findings of the recent USAID report "Applying AI/AA/ML in Promoting Fair, Legal and Sustainable Regional Fisheries Management in the Indo-Pacific Region." It will explore emerging technological solutions that show potential in overcoming barriers to sustainable fisheries management and enhancing monitoring, analysis, and enforcement mechanisms. These innovative technologies have the potential to revolutionize fisheries management, ensuring ecological sustainability and economic viability for coastal communities.
Hosted by: OCTO
Register: https://us02web.zoom.us/webinar/register/WN_afY6fSTyRCamUa3bYnc3Kw"

👍 Justin Kay
Peter van Lunteren (contact@pvanlunteren.com)
2024-05-07 02:54:03

Does anyone know of an equivalent of MegaDetector for drone imagery?

I'm asking because the combination of using MegaDetector as a feature extractor to locate the animals in camera trap images and then sending the crops through a project-specific classifier has proven very valuable. I was wondering if a similar object-detector-plus-classifier pipeline is also the standard for drone imagery. What is the current state of species identification models for drone imagery, and are there any open-source models I can experiment with?

After some very, very limited tests, it looks like MegaDetector v5a is actually pretty good at locating the animals in drone images (if the images are in colour and not infrared, and if they are properly zoomed in). Just wondering if I'm going in the right direction here...

Thanks in advance!
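For what it's worth, the detector-then-classifier pipeline described above is simple to wire up; here's a structural sketch where every function is a hypothetical placeholder (a real version would call MegaDetector, or a drone-specific detector, for `detect` and a project-specific model for `classify`):

```python
def detect(image):
    """Placeholder detector: returns (x, y, w, h, confidence) boxes.
    Stand-in for a MegaDetector (or drone-specific detector) call."""
    return [(10, 10, 40, 40, 0.95), (60, 5, 20, 20, 0.30)]

def crop(image, box):
    """Cut the detection out of the image (here a nested list of pixels)."""
    x, y, w, h, _ = box
    return [row[x:x + w] for row in image[y:y + h]]

def classify(patch):
    """Placeholder species classifier: returns (label, score)."""
    return ("animal", 0.9)

def run_pipeline(image, conf_threshold=0.8):
    """Detect, filter by confidence, then classify each surviving crop."""
    results = []
    for box in detect(image):
        if box[4] < conf_threshold:
            continue  # drop low-confidence detections before classifying
        results.append((box, classify(crop(image, box))))
    return results
```

The threshold step is where the camera-trap and drone cases differ most in practice: small, distant animals in aerial frames tend to score lower, so the cutoff usually needs re-tuning per domain.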

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-05-07 04:01:13

*Thread Reply:* For drone videos, we worked on software for video annotation. It has an inbuilt detector for initial detection and then uses a tracker to track the animals across consecutive images. ( https://github.com/robot-perception-group/smarter-labelme / http://doi.org/10.1007/978-3-031-44981-9_12 ) maybe that could help in your case. It comes pre-trained with the MS-COCO object detection classes but once you have a few hundred images annotated you can re-train the detector including those, which should make it much better.

Stars
14
Language
Python
🙌 Peter van Lunteren
Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-05-07 04:03:05

*Thread Reply:* don't hesitate to send me a message if you have questions about installation or usage

Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2024-05-07 06:35:10

*Thread Reply:* Hi @Peter van Lunteren, we are actually also working on this kind of thing within our research project BAMBI (https://www.bambi.eco/). Currently we are still finishing the labelling work, but within the next few months we plan to release some animal detection models trained on thousands of individual animals, visible in tens of thousands of video frames 🙂 Maybe there is some way of collaborating 🙂

🙌 Peter van Lunteren
Sara Beery (sbeery@caltech.edu)
2024-05-07 11:08:11

*Thread Reply:* The team at WildMe has been working on this as well, @Jason Holmberg (Wild Me)

🎉 Jon Van Oast, Jason Holmberg (Wild Me)
🙌 Peter van Lunteren
Ben Weinstein (benweinstein2010@gmail.com)
2024-05-07 12:31:25

*Thread Reply:* @Peter van Lunteren you can train multi-class models easily through our API as well. A general 'animal' detector is something we have discussed, but not taken up. https://deepforest.readthedocs.io/en/latest/. We have found that starting from our 'bird' model you can train for other animals, like deer, with just a few annotations.

🙌 Peter van Lunteren
👍 Piotr Tynecki
Peter van Lunteren (contact@pvanlunteren.com)
2024-05-09 02:54:14

*Thread Reply:* Thanks for the tips - this is great! I'll play around with the options and get in touch if I have specific questions. This slack channel is very valuable 🙂

❤️ Sara Beery
Steve Haddock (haddock@mbari.org)
2024-05-13 19:56:00

*Thread Reply:* @Danelle Cline 👀

✅ Danelle Cline
Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2024-05-07 06:42:40

Hi everyone,

some of you have probably heard of the workshop series on "Camera Traps, AI and Ecology". The workshop was held at the University of Jena (https://inf-cv.uni-jena.de/camtrap-ws/) last year in a hybrid format, and online in the years before that (https://camtrapai.github.io/indexold.html).

This year we are continuing the workshop in a hybrid format. As such, we are pleased to invite you to the fourth international workshop on "Camera Traps, AI and Ecology," which will be held this coming September on the Hagenberg campus of the University of Applied Sciences Upper Austria (FH Oberösterreich). We are currently accepting submissions and greatly value your contributions!

This workshop is part of the international series dedicated to using artificial intelligence to monitor (wild) animals and address ecological issues. The event aims to bring together experts from various fields, including data providers such as nature parks and conservation areas, scientists such as ecologists and conservationists, and AI and Data Science experts.

We aim to promote exchange between these communities, link ecological data and AI methods, and initiate new interdisciplinary projects. The workshop will be held in a hybrid format (live and online), including invited lectures, paper presentations, and discussions.

We would be delighted to welcome you as participants at this workshop. You can participate as a listener or actively present your submission! Accepted publications will be published in the form of workshop proceedings (including DOI). In addition, we would appreciate it if you could forward this invitation to your network.

Here are the details of the workshop:
Date: 05.09. – 06.09.2024 (Paper Deadline: 28.06.)
Homepage: https://camtrap2024.fh-ooe.at/
Location: Hagenberg im Mühlkreis, Austria or Online
There will be no registration fees for participating on-site or online!

The journey to the picturesque Mühlviertel and Linz offers additional advantages during the workshop. From September 4th to 8th, the Ars Electronica Festival, the world's largest festival at the intersection of art, technology, and society (which will be part of our social event), as well as the Linzer Klangwolke on September 7th, will take place.

If you have any further questions or need additional information, please do not hesitate to contact me directly or via camtraps2024@fh-hagenberg.at .

We look forward to welcoming you to this workshop!

With best regards, Christoph

On behalf of the rest of the organisers:
David Schedl (FH Upper Austria)
Paul Bodesheim (University of Jena)
Tilo Burghardt (University of Bristol)

camtrapai.github.io
camtrap2024.fh-ooe.at
🙌 Jennifer, Irina Tolkova, Sara Beery, Majid Mirmehdi, Dan Morris, Talia Speaker, Malika Nisal Ratnayake
😎 Jon Van Oast
❤️ Otto Brookes
👍 Piotr Tynecki
Tom August (tomaug@ceh.ac.uk)
2024-05-08 04:39:43

Yesterday the special issue on automated insect monitoring came out in Philosophical Transactions of the Royal Society. There are papers on computer vision, acoustic monitoring, radar and molecular methods. It also has a lot of opinion pieces and perspective pieces that are relevant to all of these technologies. For example dealing with the needs of insect monitoring in low and middle income countries, data sharing, citizen science and long term monitoring. https://royalsocietypublishing.org/toc/rstb/current

Roel van Klink is compiling a folder with all PDFs. Feel free to download everything here and share the link: https://www.dropbox.com/scl/fo/zjsc6oyghabhs5lngl14o/AK9MFw0GQawiMRDKYVa65Fs?rlkey=b5uofp8ogg37og2nc0jwofty1&dl=0 [text taken from a post by Roel in another Slack workspace]

Dropbox
👍 Georgia Atkinson, Shir Bar, Sara Beery, Thijs van der Plas, Justin Kay, Carly Batist, Ishan Nangia, Talia Speaker, Benjamin Hoffman, Piotr Tynecki, Isabel Fenton, Morgan Langley, Malika Nisal Ratnayake
Molly Blank (mblank@naturalstate.org)
2024-05-08 13:13:52

Hi, everyone - I understand that introductions are encouraged for newcomers. 👋:skintone2: Delighted to have been pointed to this incredible resource (thanks, @Dan Morris!).

:flag_ke: I'm working at Natural State, a Kenya-based non-profit developing biodiversity survey platforms that scale across landscapes to facilitate more types of conservation funding.
🥾 Our approach to AI in conservation in the near term requires experts in landscapes and people on the ground, so we're keen to use tools that partner well with human-in-the-loop processes (e.g. sorting empty images, annotation tooling for building training data).
📷 I work primarily in sub-Saharan landscapes with ongoing projects in central/northern Kenyan rangelands and the succulent Karoo. Our bread-and-butter data types are camera trap images, bioacoustic recordings, and oblique aerial images, with remote sensing in the works.
💥 Fun fact: we're finalists in the upcoming Rainforest XPRIZE to showcase some of our data management methods designed with our Kenyan field team.

Looking forward to learning with you all!

😎 Jon Van Oast, Jason Holmberg (Wild Me), Timm Haucke
🙌 Nicolas Arrieta Larraza, Dan Morris, Shir Bar, Bernie Boscoe, Ishan Nangia, Talia Speaker, Jason Holmberg (Wild Me), Kalindi Fonda, Timm Haucke, Patrick Beukema
👋:skin_tone_2: Cara Appel
👋 Timm Haucke, Dhruvin Vora, Viktor Domazetoski, Maricela Abarca, Darshana Salvi
👍 Piotr Tynecki, Kishore Panaganti
Dan Morris (agentmorris@gmail.com)
2024-05-08 13:41:08

*Thread Reply:* And in a bizarre confluence of universes, Molly's father taught me everything I know about op-amps (which still isn't a lot, but whatever I know, it's thanks to Molly's dad).

🌊 Kalindi Fonda
😎 Jason Holmberg (Wild Me)
Ben Williams (ben.williams.20@ucl.ac.uk)
2024-05-14 05:59:43

I'm on the lookout for labs experienced in digital twinning, especially those that have incorporated computer vision techniques. If anyone is aware of any such labs, particularly those that have or are interested in applying these methods to conservation efforts, then please do share 😊

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-05-14 10:05:25

*Thread Reply:* this might be of interest - https://biodt.eu/

BioDT
🙏 Ben Williams
Joe Nangle (joe.nangle@portfoliot.com)
2024-05-14 11:25:18

*Thread Reply:* Take a look at this as well.

Brian Mayton gave our team a tour of the sensor network, and it's pretty impressive stuff! Possibly more audio-oriented than your computer vision focus, but maybe still of interest?

MIT Media Lab
🙏 Ben Williams
Ben Williams (ben.williams.20@ucl.ac.uk)
2024-05-14 13:07:06

*Thread Reply:* Thank you both, fascinating projects! Any other recommendations welcomed

Gerald Maschmann (gerald_maschmann@fws.gov)
2024-05-15 13:22:16

👋 Hi everyone!

Gerald Maschmann (gerald_maschmann@fws.gov)
2024-05-15 13:22:41

I’m a fish biologist with the US Fish and Wildlife Service in Fairbanks, Alaska. I run projects that use underwater video cameras that record salmon and other fish species passing through fish weirs. Technicians then view the video recordings and ID and count the passing fish. We use this data to estimate escapement and spawning abundance of salmon, which is then used by fish managers to manage the various salmon fisheries.

I’ve started to explore the possibilities of using A.I. to ID and count my fish for me. So, I’m here to learn more about what A.I. can do and what it can’t do. AND possibly connect with someone who could start putting together a software package that we could use.

I look forward to connecting with you. Thanks

👋 Declan, Justin Kay, Don Cosseboom, Ben Weinstein, Malte Pedersen, Timm Haucke, Sara Beery, Risa Shinoda, Enis Berk Çoban, Shir Bar, Subhransu Maji, Maricela Abarca
🙌 Ben Williams, Mohamed Elhoseiny
Ben Weinstein (benweinstein2010@gmail.com)
2024-05-15 13:30:30

*Thread Reply:* https://www.frontiersin.org/articles/10.3389/fmars.2023.1200408/full, but also, https://arxiv.org/pdf/2207.09295 @Justin Kay

Frontiers
➕ Justin Kay
Gerald Maschmann (gerald_maschmann@fws.gov)
2024-05-15 13:51:46

*Thread Reply:* This is great, thank you!

Anna Willoughby (arwill19@gmail.com)
2024-05-15 14:14:58

*Thread Reply:* One of the graduate students at my school, Christian Swartzbaugh (css36162@uga.edu), is using AI to count and track individuals of two different fish species in experimental tanks. He's advised by Stacy Lance (https://www.lancelab.org/). I'm sure you could reach out to either of them. Attached is the abstract from our annual symposium where he presented on it.

Gerald Maschmann (gerald_maschmann@fws.gov)
2024-05-15 14:18:45

*Thread Reply:* Thank you!

Timm Haucke (timm@haucke.xyz)
2024-05-15 14:30:37

*Thread Reply:* I‘m involved in a similar project primarily counting river herring, led by Robert Vincent at MIT (rvincent@mit.edu). Feel free to reach out!

❤️ Justin Kay, Sara Beery
Dan Morris (agentmorris@gmail.com)
2024-05-15 14:51:20

*Thread Reply:* [shamelessly pasting my reply to a similar message on the help_needed channel]

I have been maintaining a list of all the publicly-available models for finding fish in imagery that I'm aware of:

https://github.com/agentmorris/agentmorrispublic/blob/main/fish-datasets.md#publicly-available-models-for-fish-detection

@Filippo Varini and I have also been keeping track of public datasets suitable for training models like this:

https://github.com/filippovarini/filippo_public/blob/master/fish-datasets.md

👍 Gerald Maschmann, Sara Beery, Filippo Varini, Malte Pedersen
Gerald Maschmann (gerald_maschmann@fws.gov)
2024-05-15 14:52:21

*Thread Reply:* Thanks Timm, I would appreciate that. Should I reach out to you or Robert?

Timm Haucke (timm@haucke.xyz)
2024-05-15 15:08:18

*Thread Reply:* @Gerald Maschmann feel free to reach out to Robert first, he knows best who to include in the conversation

Thor Veen (thor@aeria.ai)
2024-05-15 15:34:53

*Thread Reply:* Hi @Gerald Maschmann, we started work on a full-stack analysis pipeline for the project described in Frontiers by Atlas et al. Happy to chat about what we are doing.

David Russell (davidrussell327@gmail.com)
2024-05-15 16:04:24

*Thread Reply:* Another option is VIAME which is a toolkit that's used by a number of the NOAA fisheries offices. I was briefly involved in the development a few years back and am happy to answer basic questions or connect you with the people still working on it at Kitware.

David Russell (davidrussell327@gmail.com)
2024-05-15 16:08:24

*Thread Reply:* One important feature is that it's designed specifically to make it easy to train models for your own tasks.

Gerald Maschmann (gerald_maschmann@fws.gov)
2024-05-15 17:02:27

*Thread Reply:* Ok, thank you! This is great. I will definitely be in touch. Right now I'm in the middle of planning for the salmon season, so it might be later this summer or even this fall before I circle back.

Filippo Varini (fppvrn@gmail.com)
2024-05-15 19:58:18

*Thread Reply:* Hi @Gerald Maschmann, I advise you to also check out the new MBARI open-source models on Hugging Face. Here.

You can play around with a few examples in this interactive dashboard

huggingface.co
huggingface.co
❤️ Tarun
Chris Lange (s2125675@ed.ac.uk)
2024-05-16 07:22:07

*Thread Reply:* Here is some recent work that focuses on detecting fish in sonar imagery using AI, and improving performance at new locations you haven't trained on. The authors are very knowledgeable and have been working on salmon detection. Might be a good idea to reach out to them.

https://arxiv.org/abs/2207.09295

https://aldi-daod.github.io/

arXiv.org
aldi-daod.github.io
Tom Wye (Fishial.ai) (twye@fishial.ai)
2024-06-04 11:25:45

*Thread Reply:* @Gerald Maschmann check out fishial.ai. There's a demo at https://portal.fishial.ai/search/by-fishial-recognition, and the system can be used to collect and label images of fish. The segmentation model is spot on. If you want to talk, email support@fishial.ai

portal.fishial.ai
Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-20 08:40:06

Does anyone have any insights into what the current best classification networks are for animal species identification? We currently fine-tune an Inception v3 network, but we haven't refreshed our approach for 3 or 4 years, so I'm wondering if there are now better architectures and base models around.

(This is all assuming that successful detection has already localised your animal in the image)

Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-20 08:43:10

*Thread Reply:* Incidentally, on a side note, I've had a play with GPT-4o and the initial results look pretty impressive. I asked it how it was getting the results and it said (if you can trust it) that it was making use of the kind of text description you might see in a spotter's guide. Which is an interesting approach for making AI results more human-understandable.

Sara Beery (sbeery@caltech.edu)
2024-05-20 09:10:13

*Thread Reply:* @Zhongqi Miao has been looking at this recently 🙂

Nicolas Arrieta Larraza (n.arrieta.larraza@gmail.com)
2024-05-20 10:15:45

*Thread Reply:* Hi hi! Is there any limitation/constraint you would need to take into account? (model size, inference speed,...)

Nowadays, I would say ResNet-like architectures are good enough 👍:skintone2:.

Nevertheless, in my personal opinion, good performance comes down to other factors such as: good quality data, loss function choice and a suitable data pipeline, among others.💡

👍 Robert Dawes
Nate Harada (nharada1@gmail.com)
2024-05-20 11:17:01

*Thread Reply:* Suggest a pretrained transformer like DinoV2 or SigLIP unless you have a ton of labeled data or strict latency requirements

👍 Robert Dawes, Gaspard Dussert, Valentin Gabeff
Sara Beery (sbeery@caltech.edu)
2024-05-20 11:25:59

*Thread Reply:* Data choices still almost always matter more than architecture

✅ Nicolas Arrieta Larraza, Bernie Boscoe, Piotr Tynecki, Edward Bayes, Kyra Swanson
👍 Robert Dawes
Piotr Tynecki (piotr@tynecki.pl)
2024-05-20 13:23:34

*Thread Reply:* @Robert Dawes You can train and benchmark YOLO-based architectures (YOLOv8, YOLOv9, YOLO-NAS, etc.) against Vision Transformer (ViT) detectors like RT-DETR, or try a novel approach based on visual language models such as CogVLM, or large multimodal models like PaliGemma.

All of these approaches will provide interesting and accurate results for your problem, but as @Sara Beery said, your key focus should be a gold-standard dataset and smart, ecology-aware data stratification before launching the fine-tuning stage.

👍 Robert Dawes
Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-20 15:34:54

*Thread Reply:* Interesting to hear about these transformer-based approaches (which of course didn't exist when we created our original pipeline)

Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-20 15:38:11

*Thread Reply:* Have you got any tips for ensuring our data is as good as possible? Or any suggested papers that have followed a good approach?

Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-20 15:39:23

*Thread Reply:* And in answer to the performance issues, we previously used Inception v3 because it was very quick and cheap - and didn't require much compute. This time round we have scope to be a bit more flexible.

Zhongqi Miao (zhongqi.miao@berkeley.edu)
2024-05-20 15:48:46

*Thread Reply:* It really depends on how you define "better". I personally am still using ResNet-18 for most of my classification projects because I don't need the extra 2% to 5% improvement from newer architectures. And the majority of issues come from the data itself, like imbalance and noise, which can't be solved by model architecture. Ultimately we will rely on humans in the loop, so as long as we have a good calibration pipeline, it is applicable. So what do you have in mind that could make your current pipeline better? That would be a good starting point.

👍 Nicolas Arrieta Larraza
Piotr Tynecki (piotr@tynecki.pl)
2024-05-20 15:50:25

*Thread Reply:* @Robert Dawes One good tip is to collect, store and process the data in Camtrap DP standard (paper), especially if you have data from many sources from many research groups.

Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-21 05:40:20

*Thread Reply:* @Zhongqi Miao - our current pipeline has classifiers for a couple of domains which we've been using for a few years. We've now got some work that would involve expanding to look at other domains/locations and could also bring in some new training sets. So we felt that if we're going to do some more training we ought to try and at least think about what model we're training.

What "better" means could actually be open for debate. It might be just better classification, but it could also mean being quicker to retrain with new data or being as cheap as possible.

Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-21 05:42:42

*Thread Reply:* At the very least we should compare something like resnet to our current model to see what improvements or issues there are

Zhongqi Miao (zhongqi.miao@berkeley.edu)
2024-05-21 12:50:07

*Thread Reply:* @Robert Dawes I see! I think in your case it is more of a domain generalization task. First of all, as a place to check new models, https://paperswithcode.com/sota/image-classification-on-imagenet is always a good place to start, and you will see that almost all models on the leaderboard are transformer models with huge parameter counts and huge pretraining (or some contrastive pretraining). Transformers are also notoriously hard to fine-tune, so I don't know whether these models will be useful for your project. However, because they are trained on large data, these models are usually good at domain generalization on general-domain data. But for camera traps I have doubts, because the domains are so different from general images. You can still see ResNet-152 on the leaderboard, though.

paperswithcode.com
Zhongqi Miao (zhongqi.miao@berkeley.edu)
2024-05-21 12:51:25

*Thread Reply:* On the other hand, since it is a domain generalization task, I think searching for methods that target domain generalization might be more effective. Even collecting enough training data from different domains can be an option, I think

Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-21 13:04:37

*Thread Reply:* I’ve wondered in the past how good a benchmark ImageNet is for wildlife classification given that most objects in there don’t have the characteristics of wildlife - deformation etc

Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-21 13:07:01

*Thread Reply:* I - like most people - have fine tuned networks trained on ImageNet. But I wonder if that has some flaws as an approach.

Robert Dawes (robert.dawes@bbc.co.uk)
2024-05-21 13:07:20

*Thread Reply:* Thanks very much for the notes above

Zhongqi Miao (zhongqi.miao@berkeley.edu)
2024-05-21 13:12:38

*Thread Reply:* Exactly. Those leaderboards are only a place for people to see what the state-of-the-art classification models are. And usually, as long as they are fine-tuned well, better models on the leaderboard can yield better results on animals. But the tricky part is "fine-tuned well". And like you said, ImageNet-pretrained models and wildlife imagery have domain discrepancies, so out of the box no ImageNet model works on camera trap images. However, all the low-level visual features are the same: they are all edges, textures, lines, and shapes. ImageNet is good because it covers most of these low-level features, and fine-tuning is basically a way to transfer combinations of these low-level features to other domains like wildlife imagery. This is also why transformers trained on huge pretraining data work even better: that data covers more low-level features and possible combinations

👍 Bernie Boscoe
Antonio Ferraz (antonio.a.ferraz@jpl.nasa.gov)
2024-05-21 20:12:04

Hi everyone, who is attending the World Biodiversity Forum? Lacey Hughey (Smithsonian), Talia Speaker (WildLabs and WWF) and myself (NASA JPL) are thinking of organizing side events (an informal meeting and a social in a bar) around the topic of animal movement science and applications. Let us know if you are interested or know someone who might be interested in joining us.

❤️ Sara Beery, Caleb Robinson, Robin Sandfort, Kalindi Fonda, Talia Speaker
Kalindi Fonda (kalindi.fonda@gmail.com)
2024-05-22 06:23:43

*Thread Reply:* Oh wow this is interesting.

Would you recommend going to the event as a "private person"?

Robin Sandfort (sandfort@wildbiologie.org)
2024-05-22 07:33:13

*Thread Reply:* Which of the evenings are you looking at? I might be coming. Greetings from Austria, Robin

❤️ Talia Speaker
Talia Speaker (talia.speaker@wildlabs.net)
2024-05-22 12:27:21

*Thread Reply:* Yay Robin! We're thinking a social following the Apero event on the Monday evening and a meeting Wednesday around/after lunch, but still finalizing

✅ Robin Sandfort
Sam Lapp (sam.lapp@pitt.edu)
2024-05-24 18:35:47

Hi all, I’m looking for opinions on PyTorch vs Pytorch+Lightning

specifically, I'm debating whether the OpenSoundscape python library should move from internally using raw PyTorch to using PyTorch Lightning. My thinking is that many implementation details (such as correctly and optimally scaling training/inference across devices, implementing mixed precision or gradient accumulation, integration with many different logging GUIs, etc.) have already been implemented by the Lightning experts, so re-implementing them may be worse than switching to Lightning. I'm curious to hear opinions/thoughts/votes or specific details to take into consideration.

⚡ Suzanne Stathatos
Suzanne Stathatos (suzanne.stathatos@gmail.com)
2024-05-24 18:45:36

*Thread Reply:* Lightning’s modularization is honestly really nice. I know other frameworks (i.e. lightly) include tutorials and documentation in both pytorch and pytorch+lightning, which is definitely more work from a documentation perspective, but is also really nice from a user perspective to not be forced to use one or the other.

I know @Markus Marks has thoughts on this too

👍 Sam Lapp
Josafat-Mattias Burmeister (josafat-mattias.burmeister@web.de)
2024-05-25 09:24:25

*Thread Reply:* I have worked with both plain PyTorch and PyTorch Lightning in different projects. PyTorch Lightning offers many great features such as automatic device handling, support for various metrics logging platforms, scaling of training across multiple devices, early stopping, checkpointing, etc. In my experience, this speeds up development, especially at the start of a project. It also helps to increase the maintainability of the code, as you have to write, test and maintain less of the code yourself. However, the learning curve for PyTorch Lightning is also steeper. For people who have little developer experience with deep learning, I would therefore recommend gaining experience with plain PyTorch first. For me, this helped to understand what happens at which point in a training or inference pipeline. Another disadvantage of PyTorch Lightning is that you have less control over the training and inference pipeline, at least if you are working with the standard Lightning Module. For projects with many non-standard steps in training or inference, plain PyTorch is the better choice in my opinion.

👍 Sam Lapp, Valentin Gabeff
Ben Weinstein (benweinstein2010@gmail.com)
2024-05-28 12:14:59

*Thread Reply:* We use Lightning for DeepForest and it's really nice to package everything into a single class. It allows us to store a config file, m = main.deepforest(<you can put a config file here>), and then do things like m.create_trainer(). The only awkward piece is that sometimes a PyTorch Lightning trainer wants the module specified explicitly, so it can look a little silly: m.trainer.fit(model=m). But otherwise it really saves the user from needing to know things, and it would allow you to swap out models and architectures in the future without the user needing to care. If your goal is to make the user experience easier, I would use Lightning; if your goal is to give maximum flexibility for just your team, plain PyTorch. Here is the example of our Lightning subclass: https://github.com/weecology/DeepForest/blob/7de55c1a941ab53dc06158cb84f8fd3ca5ca5b0a/deepforest/main.py#L22

👍 Sam Lapp
Sam Lapp (sam.lapp@pitt.edu)
2024-05-30 10:24:23

*Thread Reply:* thanks for the examples!

Sam Lapp (sam.lapp@pitt.edu)
2024-05-30 10:37:51

*Thread Reply:* @Ben Weinstein this example reminds me of something I've wondered about with PyTorch Lightning - the API requires a trainer object separate from the LightningModule object, which led me to add self.trainer = Trainer(… to the subclass of LightningModule as you have in this codebase. But this leads to the strange syntax of self.trainer.fit(self, … . It seems like that has worked out OK for you? I haven't been able to figure out whether that's what other PL users are doing or if there are issues with it.

Ben Weinstein (benweinstein2010@gmail.com)
2024-05-30 12:22:14

*Thread Reply:* I think being clear in the docs that this is the desired format is a reasonable trade for having a single config file go in: the user gets to call self.create_trainer() (or some kind of wrapper) and doesn't need to specify the many, many arguments.

Ben Weinstein (benweinstein2010@gmail.com)
2024-05-30 12:22:29

*Thread Reply:* Of all the complaints we get, that has never been one of them.

👍 Sam Lapp
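The config-owns-trainer pattern discussed in this thread can be sketched framework-free; `Trainer` and `create_trainer` below are stubs that only mirror the Lightning and DeepForest names, not the real APIs, and the `epochs` key is an invented example config field.

```python
class Trainer:
    """Stand-in for pytorch_lightning.Trainer: takes keyword config
    and fits whatever module it is handed."""

    def __init__(self, **kwargs):
        self.config = kwargs
        self.fitted = None

    def fit(self, model, data=None):
        # A real Trainer would run the training loop here.
        self.fitted = model


class Module:
    """Stand-in for a LightningModule subclass that owns its trainer,
    so users supply one config instead of many Trainer arguments."""

    def __init__(self, config):
        self.config = config
        self.trainer = None

    def create_trainer(self):
        # Translate the single user-facing config into Trainer arguments.
        self.trainer = Trainer(max_epochs=self.config.get("epochs", 10))


m = Module({"epochs": 5})
m.create_trainer()
# The "awkward" call from the thread: the module hands itself to its own trainer.
m.trainer.fit(model=m)
```

The self-reference looks odd, but it keeps the trainer's many options hidden behind one config object, which is the user-experience win Ben describes.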
Ashley Kim (hugaskim@gmail.com)
2024-05-25 12:38:28

Hi everyone, my name is Ashley! I am a rising junior at UC Berkeley interested in tackling challenges within the intersections of biodiversity and AI. I was inspired towards this field by Google’s Wildlife Insights initiative and was so glad to have found the talk with Sara Beery on the tinyML YouTube channel as a result. Joining the Slack, I am so excited to see a wide variety of interesting topics sparking conversation but also feel a bit overwhelmed. As a beginner hoping to build a robust understanding, what subjects should I look at first?

👋 Sara Beery, Omiros Pantazis, Shir Bar, Mitchell Rogers, Jason Holmberg (Wild Me), Dan Morris, Ishan Nangia, Andy Viet Huynh
Filippo Varini (fppvrn@gmail.com)
2024-05-26 18:34:12

Hi all, I am trying to convince an audience of conservationists that is new to and skeptical about AI that it can drastically enhance their efforts. Can anyone suggest a paper or graph showing how models like MegaDetector have boosted conservation efforts?

Dan Morris (agentmorris@gmail.com)
2024-05-26 20:34:56

*Thread Reply:* This paper is not related to MegaDetector, but it's the most comprehensive assessment I've seen of the benefits of AI to a camera trap project (it evaluates the use of the eVorta system wrt time, money, and carbon):

Smith J, Wycherley A, Mulvaney J, Lennane N, Reynolds E, Monks CA, Evans T, Mooney T, Fancourt B. Man versus machine: cost and carbon emission savings of 4G-connected Artificial Intelligence technology for classifying species in camera trap images. Preprint. 2024.

A couple of papers related to MegaDetector have reported results as a speedup in processing time... YMMV, they are all asking very different questions:

Fennell M, Beirne C, Burton AC. Use of object detection in camera trap image identification: Assessing a method to rapidly and accurately classify human and animal detections for research and application in recreation ecology. Global Ecology and Conservation. 2022 Jun 1;35:e02104.

Henrich M, Burgueño M, Hoyer J, Haucke T, Steinhage V, Kühl HS, Heurich M. A semi-automated camera trap distance sampling approach for population density estimation. Remote Sensing in Ecology and Conservation. 2023.

Mitterwallner V, Peters A, Edelhoff H, Mathes G, Nguyen H, Peters W, Heurich M, Steinbauer MJ. Automated visitor and wildlife monitoring with camera traps and machine learning. Remote Sensing in Ecology and Conservation. 2023.

[The last one doesn't explicitly talk about time, but because it's a rare scenario where full automation is plausible (because it doesn't involve species classification), time is implicit.]

But the way I usually look at this is to set aside all the papers that are about the use of AI for camera traps... any paper about AI for conservation (including papers written by me, or you, or most folks on this Slack) is biased toward a positive result, even if only through the review process. After all, we're here because we're excited about the potential of AI in this space!

Instead, the case I would make is just "here's a bunch of people using AI who would have no reason to use it if it didn't save them time". For that argument, I can only give you the MegaDetector-centric view, but FWIW, this is the reason I keep a list of MegaDetector users who agree to be included (or publish their use of MegaDetector):

https://github.com/agentmorris/MegaDetector/blob/main/README.md#who-is-using-megadetector

I want a new potential user to be able to look at that list, ping someone they at least sort of know, and ask about their experiences. Almost everyone on that list just had a job to get done, and had no inherent motivation to use AI.

Along the same lines, on the list I maintain re: papers about AI and camera traps:

https://agentmorris.github.io/camera-trap-ml-survey/#papers-with-summaries

...I have a tag called "ecology paper". That's not related to the journal it was published in, rather that tag is meant to capture "this person just had a job to do related to ecology, and wasn't inherently interested in AI, they just used it as a tool to do their work". (I realize I have not made it very easy to search for that tag... note to self.) Those are IMO better arguments in favor of AI than quantitative claims made in a paper about the use of AI: if those authors didn't find that AI was useful, they just wouldn't have used it.

💚 Burak Ekim, Justin Kay
👍 Ed Miller
Justin Kay (justinkay92@gmail.com)
2024-05-27 03:05:03

*Thread Reply:* Avoiding the term “AI” and using something more specific like “automated image processing” could also help. “AI” comes with a lot of baggage these days 😅

👍 Lukas Picek, Ed Miller, Dan Morris, Carly Batist, Mitch Fennell, michele volpi
💯 Jon Van Oast
Filippo Varini (fppvrn@gmail.com)
2024-05-27 12:19:26

*Thread Reply:* Thank you!!

Romain Lefèvre (romain.adrien.lefevre@protonmail.com)
2024-05-27 11:28:50

👋 Hello everyone!

I'm Romain, a scientific researcher in the Behavioural Ecology Group at the University of Copenhagen, led by Elodie Briefer (https://www.behavioural-ecology-group.com/). My passion lies in leveraging machine learning to decode animals' emotions, and I'm thrilled to meet you and join this Slack channel! 😊

Beyond my research, I also specialize in web design, aiming to enhance online presence and professionalism. Recently, I had the pleasure of designing the AI & Bioacoustics 101 online workshop website, and I could not be happier to introduce it to you.

Understanding and protecting wildlife has always been a challenge. Traditional methods of monitoring animal behavior and communication can be time-consuming and invasive. This is where AI comes in—revolutionizing how we decode and interpret animal sounds, leading to significant advancements in conservation and welfare.

But how can we enhance animal welfare and conservation using artificial intelligence?

Join us at the Bioacoustics & AI 101 workshop on September 25th - 26th, where we’ll explore how AI can transform bioacoustics, making it more efficient and accessible. Whether you're a researcher, biologist, or simply curious about the intersection of technology and nature, this workshop offers invaluable insights.

Workshop Highlights:
• Demystifying AI: Break down complex AI concepts into simple, accessible language.
• Hands-on learning: Train your own deep learning network to classify sounds—no coding required!
• Cutting-edge research: Explore the latest advancements in AI-powered bioacoustics.

Featured Speakers:
• Morten Goodwin: Professor of AI, University of Agder
• Marie Roch: Head of the MAR Lab, San Diego State University
• Anna Zamansky: Head of the Tech 4 Animals Lab, University of Haifa
• Yossi Yovel: Head of the Bat Lab, Tel Aviv University
• Léo Papet: Co-founder of Biophonia
• Colleen Reichmuth: Head of the Pinniped Lab, University of California Santa Cruz

Key Dates:
• Abstract submission deadline: July 19th, 2024
• Acceptance notification: August 16th, 2024
• Registration closes: September 13th, 2024

➡️ Don't miss this exciting opportunity to learn and connect with experts in the field. Register now for free and embark on an insightful journey into the future of AI in bioacoustics: https://aibioacoustics101.com/

Looking forward to virtually seeing you there! 🙂

Best,

Romain

😎 Jon Van Oast, Chris Lange
Juan Sebastián Cañas (jscanass@gmail.com)
2024-05-27 12:55:08

Hi everyone! Today we opened an unlabeled dataset of PAM, check here for more information 🐸 https://x.com/jscanass/status/1795129045815812382 https://zenodo.org/records/11244814

Zenodo
🎉 Dan Morris, Maddie Cusimano, Marius Miron, Andy Viet Huynh
😎 Jon Van Oast, Chris Lange, Andy Viet Huynh
👏 Yuval Mendelson, Maddie Cusimano, Andy Viet Huynh
Sara Beery (sbeery@caltech.edu)
2024-05-29 17:04:43

Come join us for an AI for Conservation meetup in Seattle this June!! Co-located with CVPR 2024 and co-hosted by my students @Justin Kay and @Timm Haucke, we welcome anyone interested in the intersection of AI and the environment, ecology, biodiversity, etc!!

🌍 Oisin Mac Aodha, Gabriel Tseng, Subhransu Maji, Nino Migineishvili, Dan Morris, Omiros Pantazis, Ted Schmitt, Justin Kay, Risa Shinoda, Lukas Picek, Dante Wasmuht, Thijs van der Plas, Chris Lange, Hugo Magaldi, Nico Lang, Julia Chae, Vincent Lostanlen, Gustavo Perez, Shir Bar, Lasha Otarashvili, Madeleine Grunde-McLaughlin, Neha Hulkund, Benjamin Algreen Adler, Andy Viet Huynh, Anastasios Angelopoulos
🎉 Jon Van Oast, Joe Nangle, gvanhorn, Carly Batist, Mitchell Rogers, Omiros Pantazis, Michael Procko, Justin Kay, Risa Shinoda, Bernie Boscoe, Cara Appel, Lukas Picek, Malte Pedersen, Thijs van der Plas, Valentin Gabeff, Julia Chae, Vincent Lostanlen, Gustavo Perez, Shir Bar, Talia Speaker, Lasha Otarashvili, Jason Holmberg (Wild Me), Madeleine Grunde-McLaughlin, Neha Hulkund, Tarun, Christoph Praschl, Andy Viet Huynh, Rowan Converse, Anastasios Angelopoulos
👍 Luke Sheneman, Andy Viet Huynh, Anastasios Angelopoulos
🙌 Neha Hulkund, Andy Viet Huynh, Anastasios Angelopoulos, Malika Nisal Ratnayake
🙂 Erika Barthelmess (she/hers)
Dan Morris (agentmorris@gmail.com)
2024-05-29 18:07:17

*Thread Reply:* Being in Seattle after 7pm is getting close to my bedtime, but I'm so excited to see all of the AI4C folks that I'll make an exception to my "90-year old farmer" schedule.

😅 Lukas Picek, Sara Beery, Justin Kay, Mitch Fennell, Lauren Harrell, Timm Haucke, Neha Hulkund
Thijs van der Plas (vdplasthijs@gmail.com)
2024-05-30 05:58:48

*Thread Reply:* Fantastic, thank you for organising! 🙂

❤️ Sara Beery
Hugo Magaldi (magaldi.hugo@gmail.com)
2024-05-30 09:26:19

*Thread Reply:* Would have loved to join you and meet the community, but Paris-Seattle makes for a long commute!

❤️ Sara Beery
Masato Hagiwara (hagisan@gmail.com)
2024-05-30 13:23:20

*Thread Reply:* Hi @Sara Beery, as someone working on animal communication and based in Seattle, this sounds great! I'd love to join, but can I just show up, or would I have to register?

👍 Mitchell Rogers, Sara Beery
Sara Beery (sbeery@caltech.edu)
2024-05-31 09:02:33

*Thread Reply:* Just show up! It's very casual

Lauren Harrell (laurenaharrell@gmail.com)
2024-05-31 09:03:04

*Thread Reply:* I’m sorry I can’t be there (will be in Zurich)! But this sounds awesome!

Masato Hagiwara (hagisan@gmail.com)
2024-05-31 12:15:01

*Thread Reply:* @Sara Beery Great, looking forward to seeing everyone!

Eric Cunningham (ejcunningham@gmail.com)
2024-05-31 16:04:14

*Thread Reply:* Amazing, thanks for organizing this! I'll plan to join + looking forward to meeting you all 🙂

Matt Weldy (matthewjweldy@gmail.com)
2024-06-07 11:53:47

*Thread Reply:* Sounds like fun. I'll pop across the ferry.

Marius Miron (marius.miron@earthspecies.org)
2024-06-05 05:47:21

Dear AI for Conservation Community,

The submission deadline for VIHAR-2024 has been extended to June 30th, 2024.

VIHAR-2024 is the fourth international workshop on Vocal Interactivity in-and-between Humans, Animals and Robots, a satellite event of Interspeech 2024. It will take place in a hybrid format and will be hosted in Kos, Greece on 6th September 2024 and online on 9th September 2024. VIHAR-2024 aims to bring together researchers studying vocalization and speech-based interaction in-and-between humans, animals and robots from a variety of different fields. VIHAR-2024 will provide an opportunity to share and discuss theoretical insights, best practices, tools and methodologies, and to identify common principles underpinning vocal behavior in a multi-disciplinary environment.

We invite original submissions of 5-page papers (with the 5th page reserved exclusively for acknowledgements and references) or 2-page extended abstracts in all areas of vocal interactivity. Accepted papers will be compiled in the VIHAR-2024 proceedings, which will be published online. All papers should follow the Interspeech 2024 template.

Suggested workshop topics may include, but are not limited to, the following areas:
• Self-supervised learning for vocal signals
• Generative audio systems for vocal interactivity
• Function of vocalizations: discovery of information embedded within signals, testing for functional reference, linking vocal signals to behavior
• Physiological and morphological comparisons between vocal systems in animals and humans
• Vocal imitation and social learning of vocal signals
• Valence and emotion in vocal signals
• Inter- and intraspecies comparative analyses of vocalizations
• Interspecific vocal interactivity between non-conspecifics
• Speech perception and production in human-human interactions and human-robot interactions
• Comparative analysis of vocal signals in vocal interactivity
• Theory development of vocal interaction interfaces
• Ethics of vocal interactivity

Submission instructions can be found at the EasyChair submission page: https://easychair.org/conferences/?conf=vihar2024

Important dates (AoE):
• Submission deadline: 30th June 2024
• Notification of acceptance: 21st July 2024
• Final versions for inclusion in proceedings: 30th July 2024
• Author registration closes: 21st August 2024
• Workshop: 6th and 9th September 2024
This event is sponsored by Earth Species Project (https://www.earthspecies.org/ ) and supported by the VIHAR steering committee (http://www.vihar.org/ ) and the International Speech Communication Association (http://www.isca-speech.org/).

Organizing committee: Marius Miron, Yossi Yovel, Sara Keen, Eliya Nachmani, Paola Peña, Björn Schuller, Olivier Pietquin

Jarrett Blair (jarrettblair@gmail.com)
2024-06-06 08:00:03

Hi everyone,

The third edition of the Computational Entomology Webinar will be running on June 20th at 1300 UTC. The overall theme of this edition of the webinar is processing liquid-preserved invertebrate specimens. Please see the flyer attached to this message for the speaker list, and you can find out more about the webinar (including registration) on our WILDLABS event page (https://tinyurl.com/CE-WebinarIII)

wildlabs.net
❤️ Talia Speaker, Sara Beery, Shir Bar, Joseph Dimos, Ishan Nangia
Indu (indup@princeton.edu)
2024-06-06 14:58:22

Hi, I’m Indu 👋 and I’m a graduate student conducting research on explainable AI. I’m sending some information for a paid user study about XAI and bird identification that may be of interest to members of this Slack! 🐦

Our research team (Indu Panigrahi, Sunnie Kim, Rohan Jinturkar, Olga Russakovsky, Amna Liaqat, Ruth Fong, and Parastoo Abtahi) is conducting research on understanding the Explainable AI (XAI) needs of technical and non-technical users, and how certain designs of explanation can satisfy or not satisfy these needs, for the task of bird identification.

We’re looking to interview users with a diverse range of experiences with computer vision and machine learning (from no to a lot of experience) and a variety of birding expertise (from no birding experience to experienced birder). If you are interested in participating in this ~1.5-hour study over Zoom, please fill out this short Google form (https://forms.gle/F7bmWkVG7JyT6XA27). Study participants will be compensated with a $25 Amazon gift card (which will be sent electronically).

If you have any questions, please feel free to DM or email me. Thank you! (I originally sent this in #help_needed and #cv4animals but realized that more people are on #general so apologies if you’re getting this message more than once!)

Indu (indup@princeton.edu)
2024-06-07 10:29:27

*Thread Reply:* As a note, we’re currently looking for participants with high ML expertise (e.g., have taken a course on ML and/or have experience working with a ML system, often use and study ML) and high birding expertise (e.g., have taken a course on birding and/or have experience in birding, often conduct bird-watching and study birding)

Devis Tuia (devis.tuia@epfl.ch)
2024-06-07 09:40:16

Hello everyone, I didn’t see an announcement for this on this slack (sorry if I missed it), so I take the liberty to advertise without the blessing of the organisers 😄 (@Mohamed Elhoseiny, @Sara Beery, etc): at ECCV 2024 in Milan there will be a workshop about Computer Vision for Ecology! I guess many people in this slack will be excited (I am!). Paper deadlines in July / August: https://cv4e.netlify.app

cv4e.netlify.app
🌍 Oisin Mac Aodha, Ishan Nangia, Nico Lang, Vincent Lostanlen, Mohamed Elhoseiny, Yseult Hb, Don Cosseboom, Sara Beery, Shir Bar, Robin Zbinden, Nicolas Arrieta Larraza, Fagner Cunha, Andrew Schulz, Mélisande Teng, Gustavo Perez, Valentin Gabeff, Julia Chae, Juan Sebastián Cañas
🙌 Justin Kay, Dan Morris, Nico Lang, Omiros Pantazis, Vincent Lostanlen, Don Cosseboom, Sara Beery, Robin Zbinden, Cameron Trotter, Gustavo Perez, Julia Chae, Juan Sebastián Cañas
❤️ Sara Si-Moussi, Vincent Lostanlen, Jon Van Oast, Don Cosseboom, Sara Beery, Robin Zbinden, Julia Chae, Juan Sebastián Cañas
🌏 Chris Lange, Julia Chae, Juan Sebastián Cañas
Oisin Mac Aodha (macaodha@caltech.edu)
2024-06-07 09:42:33

*Thread Reply:* Very cool!

Urs (urs.waldmann@uni-konstanz.de)
2024-06-11 06:59:04

*Thread Reply:* Will there be a page limit for submissions? I did not find any information on the website. Maybe I missed something. Thanks!

Devis Tuia (devis.tuia@epfl.ch)
2024-06-11 07:22:12

*Thread Reply:* @Mohamed Elhoseiny?

Benno Simmons (benno.simmons@gmail.com)
2024-06-11 06:10:02

Fully-funded PhD opportunity with me and Tatsuya Amano at University of Exeter! Come and do the first research into responsible AI for biodiversity monitoring, developing ways to ensure these AIs are safe, unbiased and accountable. Generous research and travel costs, plus the opportunity to spend time in Australia at University of Queensland. *Deadline June 28* https://www.findaphd.com/phds/project/towards-responsible-ai-systems-for-automated-biodiversity-monitoring-centre-for-ecology-and-conservation-quex-phd-studentship/?p172597 PLEASE REPOST/SHARE WIDELY!

👍 Burooj Ghani, Dylan Van Bramer (she/her), Omiros Pantazis, Remi Gosselin, Dan Morris, Don Cosseboom, Sara Beery, Connor Levenson
Nico Lang (nila@di.ku.dk)
2024-06-12 03:00:13

Hi, We are hosting the PhD course on “SSL4EO: Self-Supervised Learning for Earth Observation” from July 01-05, 2024 at the University of Copenhagen, Denmark. We would like to invite PhDs and other researchers interested in the topic of self-supervised learning and/or Earth observation to participate and engage with fellow researchers, and experts at SSL4EO.

Event Details: Title: SSL4EO: Self-Supervised Learning for Earth Observation Dates: July 01-05, 2024 Location: Øster Voldgade 10, 1350 Copenhagen, Denmark Credit points: 2.5 ECTS More details: https://ankitkariryaa.github.io/ssl4eo/

Registration: https://eventsignup.ku.dk/ssl4eo/signup (Early registration is recommended, space is limited.)

Agenda: Our PhD course will cover a range of topics related to self-supervised learning in the context of Earth observation. Our list of invited speakers includes:

Randall Balestriero, META AI
Marc Russwurm, Wageningen University
Konstantin Klemmer, Microsoft Research
Bruno Sánchez-Andrade Nuño, Clay
Jan Dirk Wegner, UZH/ETH Zürich
Xiaoxiang Zhu, Technical University of Munich

Contact Information: For any inquiries or additional information, feel free to reach out to us {ak, stefan.oehmcke, nila}@di.ku.dk.

We look forward to welcoming you to the first iteration of SSL4EO: Self-Supervised Learning for Earth Observation!

Best regards, Stefan Oehmcke, Nico Lang, and Ankit Kariryaa

ankitkariryaa.github.io
🙌 Thijs van der Plas, Oisin Mac Aodha, Yuru Jia
Paul Allin (allinpaul@gmail.com)
2024-06-12 05:10:27

Hi everyone, very excited about a new research project using ML and airborne small aperture radar to detect snares. Looking for a good MSc student; CVs need to be in by the end of July. Please feel free to share!

😎 Jason Holmberg (Wild Me)
Benjamin Algreen Adler (benjamin@algreenadler.com)
2024-06-13 09:00:00

Many of you may have already seen this, but the Bezos Earth Fund's $100M AI for Climate and Nature grand challenge is now accepting submissions, and biodiversity conservation is a focus area in the first round. Eligibility is limited to academic institutions and U.S. 501c3 nonprofits, but I imagine many of you have ideas and could partner with an organization to submit.

Feel free to shoot me a note if you have questions, or join the webinar on June 20th.

🎉 Katie Breen, mimi
👍 Cathy Atkinson
Sara Beery (sbeery@caltech.edu)
2024-06-16 05:19:55

Come look at birds with us next Thursday morning before CVPR!!

https://x.com/sarameghanbeery/status/1802269672575783202

X (formerly Twitter)
🐦 Nicolas Arrieta Larraza, Justin Kay, Urs, Shir Bar, Mitchell Rogers, Andrew Schulz, Oisin Mac Aodha, Fagner Cunha, Jonathan Roberts, Malte Pedersen, Ben Seleb, Thijs van der Plas, Julia Chae, Brian Geuther, Mike Rowling
❤️ Lukas Picek, Joseph Dimos, Omiros Pantazis, Justin Kay, Ted Schmitt, Julia Chae, Talia Speaker
🎉 Jon Van Oast, Julia Chae
🦉 Angela Zhu
Jes Lefcourt (jeslefcourt@gmail.com)
2024-06-16 06:37:40

*Thread Reply:* For those who are driving, I suggest parking here: https://maps.app.goo.gl/WTjD63LQeL4RMfL29

google.com
❤️ Sara Beery, Ted Schmitt, Julia Chae
Dan Morris (agentmorris@gmail.com)
2024-06-18 12:29:37

*Thread Reply:* Make all of us who didn't make it this morning jealous... what awesome birds did you see?

Sara Beery (sbeery@caltech.edu)
2024-06-18 12:33:30

*Thread Reply:* On Thursday!! No birds this morning, alas

Dan Morris (agentmorris@gmail.com)
2024-06-18 14:30:49

*Thread Reply:* Ah, my bad. I mentally collapsed all of CVPR into "Monday" and "the rest of CVPR". I'll ask again for a bird report on Thursday.

👍 Sara Beery, Izzy Zhu
Sara Beery (sbeery@caltech.edu)
2024-06-19 13:21:51

*Thread Reply:* For those joining tomorrow, if you're coming over from the conference center with us we will meet at 7 in front of Arch (7th and Pike) and then take the light rail. Otherwise we will collectively meet at the UW Boathouse at 7:30!

👍:skin_tone_2: Cara Appel
👍 Nino Migineishvili, Julia Chae, Pia Bideau, Brian Geuther, Mathias Günther, Xiaojuan Liu, Andrew Schulz
🎉 Riley Knoedler
👍:skin_tone_3: Jess Tam
Mathias Günther (mathias.guenther@uni-konstanz.de)
2024-06-19 20:22:06

*Thread Reply:* Will you leave at 7 sharp? Or is there a little time to pick up some breakfast to go?

Ted Schmitt (teds@allenai.org)
2024-06-17 13:22:43

I plan to meet you there at 7:30.

🎉 Ben Weinstein, Sara Beery
Julia Chae (chaenayo@mit.edu)
2024-06-18 00:31:15

Introducing the first Computer Vision for Ecology (CV4E) Workshop at #ECCV2024 in Milan 🌳 If your work combines computer vision and ecology, submit a paper and join us! The workshop will cover applications across diverse ecological systems, featuring exciting speakers, discussions, and challenges (TBA). Thank you @Devis Tuia for promoting us earlier this month 🙂

Deadlines: July 15 (Proceedings) / Aug 15 (Non-Archival) More details: https://cv4e.netlify.app/submit/

Also please follow our twitter page for future updates! https://x.com/CV4E_ECCV/status/1802921207286862026

X (formerly Twitter)
🎉 Justin Kay, Catherine, Timm Haucke, Nina van Tiel, Alba Márquez-Rodríguez, Catherine Villeneuve, Dan Morris, David Russell, Sara Beery, Neha Hulkund, Andrew Schulz, Gustavo Perez, Mohamed Elhoseiny, Edward Amoah Idun
🙌 Catherine, Timm Haucke, Cameron Trotter, Georgia Atkinson, Levi Cai, Nicolas Arrieta Larraza, Catherine Villeneuve, Sara Beery, Neha Hulkund, Gustavo Perez, Mohamed Elhoseiny
Julia Chae (chaenayo@mit.edu)
2024-06-27 15:35:46

*Thread Reply:* To everyone interested in submitting, we have just updated the website with more submission details such as the paper formatting requirements! Please see https://cv4e.netlify.app/submit/

Also, the paper deadlines are still July 15th and Aug 15th for proceedings and non-archival respectively 🙂 Looking forward to everyone's work

CV4E
😎 Timm Haucke, Sara Beery, Subhransu Maji, Devis Tuia, Robin Zbinden, Malte Pedersen, Gustavo Perez
Anuj Gore (anujgore23@gmail.com)
2024-06-21 06:59:52

Hi guys, some of you have probably seen this before, but I came across this really interesting paper which demonstrates that African elephants address each other with individually specific name-like calls. Here's the link if anyone is curious: https://fermatslibrary.com/s/african-elephants-address-one-another-with-individually-specific-name-like-calls#email-newsletter

🙂 Peter van Lunteren
charlotte (deshchang@gmail.com)
2024-06-24 19:56:49

Hi folks! For anyone attending NACCB, if you’re interested in NLP for conservation, come join our symposium tomorrow at 8am in the Great Hall North. We have a great set of speakers and are excited to have a panel discussion at the end. 👋

👍 Dan Morris, Tiziana Gelmi Candusso, Matt Ziegler
Dan Morris (agentmorris@gmail.com)
2024-06-24 20:47:09

*Thread Reply:* I'll be there!

💯 charlotte
❤️ charlotte
Tiziana Gelmi Candusso (tiziana.gelmi@gmail.com)
2024-06-25 02:07:21

*Thread Reply:* You have so many cool talks, I might have to jump in and out since I have some colleagues with overlapping talks, but I'm looking forward to catching as much of your symposium as I can!

❤️ charlotte
💯 charlotte
David Stein (davidstein94@googlemail.com)
2024-06-25 05:14:35

Hello everyone! I'm a PhD student at TU Dresden, Germany, working on bird species classification with data from the audio domain, in particular AudioMoths.

I would like to draw your attention to the half-day online workshop "Machine Learning for Biodiversity Monitoring", which we're going to host on August 12th. Apart from audio and bird species, we are happy to have contributions covering other data modalities and different animal families.

I'm attaching a call for participation for further information. If you would like to participate or if you have any questions feel free to DM me or to send an e-mail to David.Stein1@tu-dresden.de.

Best wishes, David

🤩 Alba Márquez-Rodríguez, Lucy Dimitrova, Ilyass Moummad, Yanting Teng
👀 Valerie, Aude Vuilli
👍 Chris Lange, Vincent Lostanlen, Alexander Merdian-Tarko, Mohamed Elhoseiny
😎 Jon Van Oast
🙂 Gijs M. Gerrits
Jonah Fox (jonahfox@gmail.com)
2024-06-27 07:00:41

Hi - I'm doing a project for a client - it's very loose right now - but I wanted to get a demo of some kind of species identification (birds or flora or similar) using photographs. Can anyone recommend a good place to start - or a half-decent open source model? There are a few on Kaggle, but it's a bit hard to know if they are good/easy to get into.

Amee Assad (aa3628@columbia.edu)
2024-07-04 17:25:21

*Thread Reply:* Hi! Check out fine-grained classification problems: https://paperswithcode.com/task/fine-grained-image-classification The CUB and NABirds datasets are a place to begin your search.

❤️ Jonah Fox
Jonah Fox (jonahfox@gmail.com)
2024-07-05 08:31:39

*Thread Reply:* that's an amazing website ty so much !

🙂 Amee Assad
Patrick Beukema (patrickb@allenai.org)
2024-06-27 11:30:29

Hi all, we are working on a project that requires annotating ecosystems in remote sensing data to create maps and ultimately a segmentation style model to automatically classify pixels. It does not strike us as a simple annotation effort. Has anyone here completed that type of project before — specifically a distributed/many-expert annotation effort at pixel (or polygon) level granularity? And if so could you comment on:

1) what tools/annotation software you found most effective (third party or open-source), 2) if there are any existing guidelines/recommendations/suggestions for how to do this most effectively 3) any papers focused on the actual data/annotation/processing (besides just the modeling work)?

Jonathan Sauder (jonathan.sauder@epfl.ch)
2024-06-27 16:55:40

*Thread Reply:* Hi, we've been building a large semantic segmentation dataset (of coral reefs) with the help of multiple experts. We're using CVAT (https://github.com/cvat-ai/cvat) which we're hosting on one of our own machines. CVAT works great, and allows configuring in a way to have projects with users and assign them sets of images etc. We also use CVAT's 'Segment Anything' support (runs on the GPU on our machine), which helps a lot.

🙏 Patrick Beukema
Patrick Beukema (patrickb@allenai.org)
2024-06-27 17:04:08

*Thread Reply:* Thank you!

Ishan Nangia (ishannangia.123@gmail.com)
2024-06-29 14:20:31

*Thread Reply:* We used labelme for easily annotating degradation indicators in a degraded tropical forest. Super easy to use. Straightforward. Allowed for offline usage (one of our annotators had limited access to internet). But it doesn't have a lot of the modern features that come with cloud-based or self-hosted software.

https://github.com/labelmeai/labelme

For working with multiple annotators, we didn't have them labelling the same images. Also, every annotator was considered equal to the others in terms of their ability. However, these two docs might be interesting reads:

https://docs.cleanlab.ai/stable/tutorials/multiannotator.html https://docs.aws.amazon.com/sagemaker/latest/dg/sms-annotation-consolidation.html
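For anyone skimming those docs later: the core idea behind annotation consolidation (majority vote over annotators, with a designated expert breaking ties) can be sketched in a few lines of plain Python. This is an illustrative sketch, not cleanlab's or SageMaker's actual API; all names here are made up:

```python
from collections import Counter

def consolidate_labels(annotations, tiebreak_annotator=None):
    """Majority-vote consolidation of per-image labels.

    annotations: dict mapping image_id -> {annotator_id: label}
    tiebreak_annotator: annotator whose label wins ties (optional)
    """
    consolidated = {}
    for image_id, votes in annotations.items():
        counts = Counter(votes.values())
        top_label, top_count = counts.most_common(1)[0]
        tied = [lab for lab, c in counts.items() if c == top_count]
        if len(tied) > 1 and tiebreak_annotator in votes:
            # On a tie, defer to the designated expert's label
            consolidated[image_id] = votes[tiebreak_annotator]
        else:
            consolidated[image_id] = top_label
    return consolidated

# Toy example: three annotators, one image with full disagreement
labels = {
    "img1": {"a1": "deer", "a2": "deer", "a3": "elk"},
    "img2": {"a1": "bear", "a2": "elk", "a3": "deer"},
}
consolidated = consolidate_labels(labels, tiebreak_annotator="a1")
```

The cleanlab and SageMaker approaches linked above go further (weighting annotators by estimated skill rather than treating everyone as equal), but this is the baseline they improve on.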

🙏 Patrick Beukema
Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2024-06-28 07:16:58

Hi everyone!

I already messaged you some weeks ago concerning the fourth edition of the workshop series on "Camera Traps, AI and Ecology", which will be held this coming September on the Hagenberg campus of the University of Applied Sciences Upper Austria (FH Oberösterreich). We are currently accepting submissions and greatly value your contributions!

I wanted to give you a small update: Considering various requests and, understanding that additional time may benefit the full development of potential papers, we have decided to extend the deadline for the paper submission. The new deadline is now set for the 19th of July, 2024.

The website https://camtrap2024.fh-ooe.at/ has been updated with the new deadline. Feel free to utilize this extra time, and we look forward to your active contribution to the workshop! We would appreciate it if you could forward this information to anyone who might be interested in the workshop too! Online registration is available from today on the homepage!

We are also happy to already be able to announce the preliminary program (https://camtrap2024.fh-ooe.at/program/) including keynotes by outstanding fellows such as Cliodhna Quigley (University of Vienna), Robin Sandfort (Capreolus e. U.) and Stefano Mintchev (ETH Zurich), as well as our social event at the Ars Electronica Festival (https://camtrap2024.fh-ooe.at/venue/#socialevent).

In case of any queries or further information, you can reach us at camtraps2024@fh-hagenberg.at.

With best regards, Christoph

On behalf of the rest of the organisers: David Schedl (FH Upper Austria) Paul Bodesheim (University of Jena) Tilo Burghardt (University of Bristol)

❤️ Sara Beery, Mélisande Teng, Sowbaranika, Edward Amoah Idun, Sofía Miñano, Otto Brookes, Chris Lange
🙂 Robin Sandfort
👍 Robin Sandfort
Keiller Nogueira (keillernogueira@gmail.com)
2024-07-01 14:42:35

Hi @channel


2nd Workshop on Machine Vision for Earth Observation and Environment Monitoring in conjunction with the British Machine Vision Conference (BMVC) 2024

Glasgow, Scotland, UK

https://mveo.github.io/


IMPORTANT DATES

Paper submission deadline: Friday, 16 August 2024
Notification of Acceptance: Monday, 9 September 2024
Camera-ready Paper Due: Monday, 16 September 2024
Workshop: Thursday, 28 November 2024


Following the success of the previous edition held in 2023, the Workshop on Machine Vision for Earth Observation and Environment Monitoring (MVEO) is back. MVEO aims to foster collaboration and idea exchange among the Computer Vision, Remote Sensing and Environmental Monitoring communities, both nationally and internationally, promoting interdisciplinary research, encouraging innovative computer vision approaches for automated interpretation of Earth observation and other correlated data, and enhancing knowledge within the vision community for this rapidly evolving and highly impactful area of research.


TOPICS include but are not limited to:

- Methods: Data-centric machine learning; remote sensing data + language processing (such as Large Language Models) models; open-set, open-world, and open long-tailed recognition; multi-resolution, multi-temporal, multi-sensor, multi-modal approaches; generative models (GANs, stable diffusion, etc); self-, weakly, semi-, and unsupervised approaches; human-in-the-loop and active learning; etc.

- Tasks: Classification; object detection; segmentation (universal, semantic, panoptic, and/or instance); data augmentation and improvement; deep fake; domain adaptation and concept drift; super-resolution; explainability and interpretability; multi and hyperspectral, optical and radar image processing; and so on.

- Applications: Disaster relief; urban planning; sustainable and intelligent agriculture; coast, sea, and marine monitoring; pollution monitoring and air/water quality analysis; circular economy; Cultural Heritage documentation and preservation; climate change; sustainable development goals; geoscience; phenological studies; and so on.

ORGANIZERS

Keiller Nogueira, University of Stirling, UK
Jan Boehm, University College London, UK
Ronny Hänsch, German Aerospace Center (DLR), Germany
Chunbo Luo, University of Exeter, UK
Diego Marcos, Junior Professor, Inria, Universite de Montpellier, France
Paolo Russo, Assistant Professor, Sapienza University of Rome, Italy
Fabiana Di Ciaccio, Assistant Professor, University of Florence, Italy
Ahmed Emam, PhD researcher, University of Bonn, Germany

🙌 Aria Ma, Claydson Bezerra, megan perra, Wanjohi Christopher, An Yu, Diego Marcos, Edward Amoah Idun, Robin Zbinden, Nina van Tiel, Quentin Geissmann
👍 Steve Murphy, Jason Holmberg (Wild Me)
❤️ Alfredo Lozada Fuentes, Kalindi Fonda, Chris Lange
👀 Valerie
Patrick Beukema (patrickb@allenai.org)
2024-07-05 12:00:02

Hi all, During CVPR, Sara swung by AI2 and gave a lecture on a new benchmark dataset for natural world imagery called INQUIRE. Abstract: Natural world images collected by communities of enthusiast volunteers provide a vast and largely uncurated source of data. For instance, iNaturalist has over 180 million images tagged with species labels, already contributing immensely to research such as biodiversity monitoring and having been cited in over 4,000 scientific papers. Yet, these images are also known to contain a wealth of "secondary data" captured unintentionally or otherwise included in images and not properly reflected in image labels. Although this data contains crucial insights into interactions, animal social behavior, morphology, habitat, co-occurrence, and many more questions, the costly, time-consuming, or expert-dependent analysis needed to extract such information prevents breakthroughs. Advances in deep learning methods for language and computer vision have the potential to enable the efficient and automated processing techniques needed to unlock the "hidden treasure" in such datasets: being able to directly search large image collections for these concepts would enable richer analyses that span beyond species identification. We posted the talk here: https://www.youtube.com/watch?v=TnOrlfRgmv4. A heartfelt thank you to Sara and her team for their visit. It was truly a pleasure to finally meet many of you in person after a year of virtual interactions.

❤️ Sara Beery, Oisin Mac Aodha, Malte Pedersen, Shir Bar, Negar Sadrzadeh, Justin Kay, Ted Schmitt, Brian Geuther, Timm Haucke, Dan Morris, Omiros Pantazis, Jess Tam, charlotte, Mitchell Rogers, Julia Chae, Neha Hulkund, Yuval Mend, Jason Holmberg (Wild Me), Nico Lang, Chris Lange, Gracie Ermi, Michael Bunsen
Julia Chae (chaenayo@mit.edu)
2024-07-09 15:21:03

ECCV 2024 Workshop Extended Deadline + Updated Requirements! We are extending the archival deadline to July 22nd. We are also now accepting Full Length Papers (14 pages) for the archival track, in addition to Short-form Papers (7 pages).

Please see https://cv4e.netlify.app/submit/ for final details

🙌 Justin Kay, Neha Hulkund, Jason Holmberg (Wild Me), Andy Viet Huynh, Gustavo Perez, Nina van Tiel, Robin Zbinden, Subhransu Maji, Sara Beery, Juan Sebastián Cañas, Thijs van der Plas, Mohamed Elhoseiny
🎉 Jon Van Oast, Andy Viet Huynh, Robin Zbinden
Ed Miller (ed@hypraptive.com)
2024-07-10 22:44:22

Hi All, I spent my 4-week sabbatical from Arm helping researchers in Ecuador accelerate their camera trap analysis using AI. We started with a simple-to-use application like EcoAssist and are working toward an Ampere-based Biodiversity Server. Read more here: https://www.linkedin.com/pulse/bringing-bearid-ecuador-sabbatical-story-ed-miller-tkh5c/?trackingId=t7Ylo0FcRt6YGRlOJYNQDg%3D%3D

😎 Jason Holmberg (Wild Me), Sara Beery
🐻 Jason Holmberg (Wild Me), Jess Tam, Joe Nangle, Chris Lange, Matthias Zuerl, Aakash Gupta, Peter van Lunteren, Alexander Merdian-Tarko
❤️ Avi Sundaresan, Bernie Boscoe, Arthur Caillau, Yseult Hb, Joe Nangle, Christoph Praschl, Jose Ruiz-Munoz, Christine Laney
Ben Weinstein (benweinstein2010@gmail.com)
2024-07-10 23:41:21

*Thread Reply:* @Ed Miller i've worked with a number of lodges in Ecuador, I forwarded your post to them, you may hear from Maquipucuna/Santa Lucia lodge where there are long standing camera trap programs and plenty of andean bears.

👍 Ed Miller
Kara Watts (kdwwatts@gmail.com)
2024-07-12 13:59:05

*Thread Reply:* @Ed Miller that is such an amazing project. You are truly living my dream. Do you have any advice on how to make the kind of connections that led to your involvement in the BearID project? I transitioned from a career in animal behavior research to focus on software development with ML and am looking for more project opportunities in animal science.

Ed Miller (ed@hypraptive.com)
2024-07-15 12:02:45

*Thread Reply:* @Kara Watts I made my initial connections on https://www.wildlabs.net/. You can dig around the various forums there to look for ideas and collaborators. You can also start working on something that interests you and reach out there for help!

🙌 Kara Watts
Kara Watts (kdwwatts@gmail.com)
2024-07-15 12:03:31

*Thread Reply:* @Ed Miller Thanks for the advice!

👍 Ed Miller
Jennifer Turliuk (jenn.turliuk@gmail.com)
2024-07-11 08:50:54

Hi folks! I am involved with running the climate and energy hackathon this fall at MIT. It brings hundreds of students from across the nation together to work on real climate challenges with real companies (last year: Google, Crusoe, Schneider Electric, Foothill Ventures and more). If you're interested in having MIT students work on a problem important to your company / the climate, send me a DM or email (jturliuk@mit.edu). We're looking for potential challenge partners/sponsors for this year's hackathon. Here is last year's website for more info: https://www.mitenergyhack.org/

❤️ Christoph Praschl, Wanjohi Christopher, Sara Beery, Abhi Ravivarma
🙌 Joe Nangle, Wanjohi Christopher, Sara Beery
Felipe Montealegre-Mora (felimomo@berkeley.edu)
2024-07-12 18:59:54

Hey folks! I'm a beginning postdoc at the Data Science for the Environment center at Berkeley, and I've been trying to get the 'lay of the land' in terms of open source packages for camtrap image classification.

Do you know of any open source GUIs for classifying camtrap images?

I'm interested in both computer-vision-powered backends as well as just GUIs that make the manual classification task easier. I've found EcoAssist (thanks @Peter van Lunteren for the cool piece of software!) but not all that much more - are there any other packages that could be helpful in this?

👋 Sara Beery, Bernie Boscoe, Andy Viet Huynh, Timm Haucke, Chris Lange, Enis Berk Çoban
👋:skin_tone_3: Jess Tam
👍 Peter van Lunteren, Maddie Cusimano
Sara Beery (sbeery@caltech.edu)
2024-07-12 19:06:52

*Thread Reply:* @Dan Morris has a pretty nice list somewhere I'm sure 🙂

🙌 Felipe Montealegre-Mora
🎉 Jon Van Oast
Dan Morris (agentmorris@gmail.com)
2024-07-13 09:51:02

*Thread Reply:* As always, I live for making lists, or in this case, even better, a list of links into another list...

Within the list I maintain re: tools related to AI and camera traps:

https://agentmorris.github.io/camera-trap-ml-survey/

...there are a few relevant sections. The first is the list of image review systems that use AI in some way, which in 2024 is almost all of them:

https://agentmorris.github.io/camera-trap-ml-survey/#camera-trap-systems-using-ml

If a tool is open-source, there should be a link to the source in the description of that tool.

There is a second section that lists no-AI-at-all tools, but as I look at this section now, I think "no AI at all" is almost synonymous with "deprecated". I.e., even the tools that have been updated in the last few years are no longer in use as far as I know, at least I haven't heard of anyone using them recently, with the possible exception of Camera Base (which hasn't been updated in a while AFAIK, but I still hear about folks using it):

https://agentmorris.github.io/camera-trap-ml-survey/#manual-labeling-tools-people-use-for-camera-traps

Last but not least, there's a section for tools that aren't specific to camera traps that people sometimes use for reviewing camera trap images; among these, Exif Pro is the most widely used AFAIK and the only one that's open-source:

https://agentmorris.github.io/camera-trap-ml-survey/#non-camera-trap-specific-labeling-tools-that-people-use-for-camera-trap-data

I have no real data to back this up, but my sense is that among desktop tools, Timelapse has the largest "market share" on Windows, and Exif Pro has the largest "market share" on MacOS, although Windows is still far more widely used than MacOS for reviewing camera trap images.


💚 Felipe Montealegre-Mora, Jason Holmberg (Wild Me), Tiziana Gelmi Candusso, Matthias Zuerl, Emilio Luz-Ricca
Sara Beery (sbeery@caltech.edu)
2024-07-13 10:25:23

*Thread Reply:* See??? @Dan Morris always coming in clutch with the lists 🤩

🙌 Felipe Montealegre-Mora, Jason Holmberg (Wild Me), Tiziana Gelmi Candusso, Robin Sandfort, Emilio Luz-Ricca
Felipe Montealegre-Mora (felimomo@berkeley.edu)
2024-07-13 12:50:06

*Thread Reply:* thanks so much you both, this was so helpful!!

Louis Moreau (luis.omoreau@gmail.com)
2024-07-16 11:05:18

Hi! I'm not sure if this has been shared here already. Here is a nice dataset of aerial imagery combined with height and density data for forests https://arxiv.org/abs/2407.09392v1?utm_source=tldrai

🌴 Chris Lange
Nicholas (nichseemail@gmail.com)
2024-07-16 15:23:05

Thank you for sharing Louis. This is helpful

Sara Olsson (sara@edgeimpulse.com)
2024-07-17 10:41:00

Hi all, I've published a blog post on rapidly labeling camera trap data using ChatGPT for species identification and a simple object detection model to get the bounding boxes. With this setup, I could run through a set of camera trap recordings and automatically obtain 520 labeled images with detected animals, with only a few minutes of manual work spent checking the output and batch-relabeling samples of the same species that got different spellings, e.g. "Alaska moose" and "Alaskan moose".

While presented in Edge Impulse, this approach can be replicated outside the platform using Python scripts for example. I’d love for you to give it a read and share your thoughts on its potential usefulness in real-world applications.

video: https://www.youtube.com/watch?v=Ek1MmZIvtE&t=1s and blog post: https://www.edgeimpulse.com/blog/adaptive-camera-trap-gpt-4o/
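Since the post notes the approach can be replicated outside the platform with plain Python scripts, the batch-relabel step that merges variant spellings might look like this minimal sketch; the synonym map and function names are illustrative assumptions, not part of Edge Impulse or any model API:

```python
import re

# Illustrative synonym map; in practice you'd build this by reviewing
# the spellings the model actually produced on your recordings.
CANONICAL = {
    "alaska moose": "Alaskan moose",
    "alaskan moose": "Alaskan moose",
}

def normalize_label(label):
    """Map a free-text species label to a canonical spelling."""
    # Collapse whitespace and lowercase before the lookup, so minor
    # formatting differences don't create separate classes.
    key = re.sub(r"\s+", " ", label.strip().lower())
    return CANONICAL.get(key, label.strip())

relabeled = [normalize_label(l) for l in ["Alaska moose", "Alaskan  moose", "red fox"]]
```

Labels without an entry in the map pass through unchanged, so the script is safe to run over a whole label file repeatedly as the map grows.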

🙌 Alex Bucknall, Aude Vuilli, Enis Berk Çoban, Jose Ruiz-Munoz, Elizabeth Campolongo, Carly Batist, Jason Holmberg (Wild Me), Jenna Kline, Talia Speaker, Dante Wasmuht, Roberta Hunt, Sinan Robillard, Louis Moreau, Alexander Merdian-Tarko
Louis Moreau (luis.omoreau@gmail.com)
2024-07-24 05:26:12

*Thread Reply:* Great work @Sara Olsson!

Kalindi Fonda (kalindi.fonda@gmail.com)
2024-07-17 20:44:12

Hello! Did you also get the message that Slack will start deleting the old content? (edit: I don't think this question is relevant for this Slack space, as it's currently active, but the next question on better availability of old data/conversations is still relevant).

Is there a way for us to protect some of the conversations from here? Is there any option to upgrade or figure out a different way, or are we ok with this being a more on-the-go conversation platform (the 90-day cutoff is unideal already)? 🌱 Thank you for being a beautiful community.

Jon Van Oast (jon@wildme.org)
2024-07-17 20:53:23

*Thread Reply:* i see this message on the <#CMAFLU078|animal_re-id> channel, fwiw. doesn't seem to say "delete" but older content is unavailable.

Jon Van Oast (jon@wildme.org)
2024-07-17 20:53:57

*Thread Reply:* it links to this page which does seem to imply there is a paid plan which solves the problem.

Jon Van Oast (jon@wildme.org)
2024-07-17 20:55:32

*Thread Reply:* ( or we could move to something open source like matrix. 🙂 )

Kalindi Fonda (kalindi.fonda@gmail.com)
2024-07-17 21:22:37

*Thread Reply:* Actually on second review of my Slack deletion notice emails it might be only for Slack channels that haven't had any activity recently. However, it's still a bit unideal that we can't tap into all the old content of this channel.

Do people often run into messages they can't see? What are the use cases where this happens?

Steve Haddock (haddock@mbari.org)
2024-07-18 16:13:02

*Thread Reply:* @Kalindi Fonda Free slack teams already have a limit where you can't access messages over 90 days old. However Slack has been hanging on to those messages behind the scenes, probably to incentivize you paying to unlock them and access your full history. This new policy is that after August 26, they will delete the archived messages that are older than a year, so even if you switched to a paid plan, you couldn't access your older team history. There was a discussion of changing platforms for this slack, but ... it took place more than 90 days ago..! ;^) (Hope this explanation was not too mansplainy — their change from keeping 10,000 messages to only 90 days really impacted a lot of science communities, so I've been following the topic.) https://www.yahoo.com/tech/slack-delete-chats-files-free-105338864.html

Kalindi Fonda (kalindi.fonda@gmail.com)
2024-07-19 04:38:24

*Thread Reply:* Thanks @Steve Haddock, yes I've been part of quite a few communities when the previous switch (to 90 messages) happened, so I remember the impact 😢

Kalindi Fonda (kalindi.fonda@gmail.com)
2024-07-19 04:40:22

*Thread Reply:* So are all slack communities affected? (I got a couple emails, I guess it's for slack channels where I am the owner/admin). There is the option to export the conversations; is this something the admins would be interested in?

Kalindi Fonda (kalindi.fonda@gmail.com)
2024-07-17 20:47:07

Also would anyone be up for another online meetup? We did one about a year ago and it was lovely. Next week?

🎉 Jon Van Oast, Levi Cai, Enis Berk Çoban, Sinan Robillard, Sandra Gómez Gálvez
Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2024-07-18 02:17:44

Hi everyone!

This is a friendly reminder that the paper submission deadline for the upcoming International Workshop on Camera Traps, AI, and Ecology is quickly approaching. Submissions are due by July 19, 2024 (Anywhere on Earth).

Workshop Details:
• Date: September 05. - 06., 2024
• Location: Hagenberg Campus, University of Applied Sciences Upper Austria (FH Oberösterreich) and Online
• Register on: https://camtrap2024.fh-ooe.at

Keynote Speakers:
• Cliodhna Quigley (University of Vienna)
• Robin Sandfort (Capreolus)
• Stefano Mintchev (ETH Zurich)
• Claudia Probst and Georg Schneider (University of Applied Sciences Upper Austria)
Tutorial Sessions by:
• Swarovski Optics
• SmartMultiCopters
Don't miss this opportunity to join esteemed researchers and industry leaders to explore the latest advancements at the intersection of camera traps, AI, and ecology.

Looking forward to your participation!

Kind regards,

David Schedl, Christoph Praschl, Paul Bodesheim, and Tilo Burghardt (the 2024 Organization Team)

👍 Oisin Mac Aodha, Sara Beery, Peter van Lunteren
Dan Stowell (dan.stowell@naturalis.nl)
2024-07-22 03:32:55

The Bioacoustics "Stack Exchange" (Q&A site) is looking for more moderators! Would you like to help maintain a pleasant Q&A community? (Time commitment: tiny.) https://bioacoustics.meta.stackexchange.com/questions/213/announcing-a-pro-tempore-election-for-2024 (By the way: Stack Exchange is a site that doesn't delete messages after 90 days 😉 )

😓 Sara Beery, Timm Haucke
😁 Vincent Lostanlen
🚀 Leonardo Viotti, Sam Lapp
Sam Lapp (sam.lapp@pitt.edu)
2024-07-22 16:47:44

*Thread Reply:* for those of us unfamiliar with the “moderator” role, is there a description somewhere?

👍 Abhi Ravivarma
Brian Geuther (brian.geuther@jax.org)
2024-07-22 12:16:32

The group I'm in will be hosting their 3rd short course on using machine learning for behavior quantification: https://www.jax.org/education-and-learning/education-calendar/2024/10-October/shor[…]-on-the-application-of-machine-learning-for-automated-quantific While our typical audience is geared more towards neuroscientists and geneticists interested in learning the fundamentals of adopting machine learning tools for their in-laboratory experiments, there may be some interest in this conservation community. While this year's schedule is still being finalized, last year's talk titles and speakers are present to provide a gist of what type of content will be presented.

👍 Sara Beery, Daniel Grzenda, Gözde Cilingir, Elizabeth Campolongo
👋 Daniel Grzenda
Peter van Lunteren (contact@pvanlunteren.com)
2024-07-24 07:44:43

🚨 New LILA BC dataset alert!

Thanks to the guys at Desert Lion Conservation, we got a new dataset consisting of 65k images and 200 videos of Namibian fauna. Labels are provided for 46 categories, primarily at the species level. The labels are mapped into the shared taxonomy.

@Dan Morris thanks for your help!

More info: https://lila.science/datasets/desert-lion-conservation-camera-traps/ MD results: https://lila.science/megadetector-results-for-camera-trap-datasets/

👍 Timm Haucke, Dan Morris, Dante Wasmuht, Shir Bar, Viktor Domazetoski, Nathan Fox, Robin Zbinden, Valentin Gabeff
🙌 Elizabeth Campolongo, Alexander Merdian-Tarko
😎 Jon Van Oast
👍:skin_tone_2: Cara Appel
👏 Victor Anton
Amrita Gupta (agupta375@gatech.edu)
2024-07-25 19:58:48

Cross-posting here, ESA anyone? https://aiforconservation.slack.com/archives/CM1JPL18R/p1721939084318559

🙋‍♂️ Joe Nangle, Sara Beery, Nathan Fox, Tarun
👋 charlotte, Leonardo Viotti, Sam Lapp, Sara Beery
Sam Lapp (sam.lapp@pitt.edu)
2024-07-27 16:51:46

*Thread Reply:* is there a conservation tech meetup planned?

👀 Elly Knight, Sara Beery, Tessa Rhinehart
Sara Beery (sbeery@caltech.edu)
2024-07-28 18:08:00

*Thread Reply:* There should be!! But I've been slammed and haven't planned anything. How about meeting for beers Monday evening at 7ish somewhere near the convention center?

Sara Beery (sbeery@caltech.edu)
2024-07-28 18:12:17

*Thread Reply:* Ok, let's do it. ESA Conservation Tech Meetup at 7pm Monday, August 5th, at Altar Society Brewing

https://maps.app.goo.gl/QwBKQPofRKQJbGLj7

🙌 Sam Lapp, Amrita Gupta, Tessa Rhinehart, charlotte, Nathan Fox
😎 Jon Van Oast, Tessa Rhinehart, charlotte
:dad_parrot: charlotte
🍻 Alan Stenhouse
Sara Beery (sbeery@caltech.edu)
2024-07-28 18:12:38

*Thread Reply:* Help spread the word?

🫡 Amrita Gupta
Sara Beery (sbeery@caltech.edu)
2024-07-28 18:15:29

*Thread Reply:* https://x.com/sarameghanbeery/status/1817685293249740829

Sara Beery (sbeery@caltech.edu)
2024-07-28 18:28:04

*Thread Reply:* https://wildlabs.net/event/ai-conservation-meetup-esa-2024

❤️ Talia Speaker
Sam Lapp (sam.lapp@pitt.edu)
2024-08-04 12:07:00

*Thread Reply:* Hope to see many of you tomorrow!

Reposting details:

ESA Conservation Tech Meetup at 7pm Monday August 5th, at Altar Society Brewing

https://maps.app.goo.gl/QwBKQPofRKQJbGLj7

👋 Joe Nangle, Matt Weldy, Sara Beery, Tessa Rhinehart, charlotte
Sara Beery (sbeery@caltech.edu)
2024-08-06 10:48:42

*Thread Reply:* It was great to see everyone last night!!!

🙌 Sam Lapp
Nathan Fox (foxnat@umich.edu)
2024-08-06 12:00:38

*Thread Reply:* Unfortunately, due to some nightmare travel disruptions I wasn't able to get here in time for last night's social. If anyone wants to connect over a coffee or beer over the next few days, feel free to message me!

😞 Sara Beery
Alexander Merdian-Tarko (alexander.merdian-tarko@posteo.de)
2024-08-01 08:31:03

Hi everybody 👋

I've been around here on the AI for Conservation Slack for a while and have been following what's going on. It's really cool to see what's happening in the space and the exciting things people are working on. I have the growing wish to become a part of this and contribute. So I feel now could be a good time to introduce myself.

I'm currently a Data Scientist at UNICEF Germany but would like to transition into a data role in conservation. This space is so important and growing but I feel that data jobs are still somewhat scarce or hard to get - especially for career switchers. Maybe someone here can help me out or point me to a specific opportunity/organization/resource. The people from this community I was already in touch with were so friendly and helpful!

Here's what I'm looking for:
• technical role where I can use my Data Science, Machine Learning and Remote Sensing skills to help nature and people (I'm quite open domain-wise or regarding the specific context)
• team with nice people to learn from
• full or part-time; contract opportunities for a limited amount of time could also be interesting
• hybrid or remote in Western/Northern Europe
Here's what I can offer:
• 5+ years experience in Data Science and Machine Learning (e.g. Python, R, SQL) and 2+ years experience in GIS and Remote Sensing (e.g. Google Earth Engine)
• experience with land cover mapping in KAZA in collaboration with WWF Germany's Space+Science team
• experience with movement ecology in collaboration with the Max Planck Institute of Animal Behavior (MoveApps) and the Okavango Research Institute
• I'm highly motivated to contribute to relevant societal and environmental issues, such as conservation and climate, and curious about learning new tools and diving into novel domains
I'd be glad to hear about potential opportunities and general feedback. Feel free to connect on LinkedIn or check out my personal website.

All the best and greetings from Cologne 🇩🇪 Alex

🙌 Peter van Lunteren, Malte Pedersen, Christoph Praschl, Timm Haucke, Sara Beery, Carly Batist, Nanticha Ocharoenchai (Lyn), Amanda Bullington, charlotte
🙌:skin_tone_3: Alan Stenhouse
Sara Beery (sbeery@caltech.edu)
2024-08-01 10:57:31

Interested in a PhD or Masters at the intersection of AI and Ecology? This research area is growing rapidly, and it can be hard to figure out which research groups are doing what, and where! Come join our info session to hear from PIs worldwide about their research and goals!!

(and if you're a PI and want to join, ping me!)

🌍 Oisin Mac Aodha, Lukas Picek, Brian Geuther, Dan Morris, Sonny Burniston, Nico Lang, Burooj Ghani, Dylan Van Bramer (she/her), Shir Bar, Meredith Palmer, Alexander Merdian-Tarko, David Russell, Catherine Villeneuve, Omiros Pantazis, Yuerou Tang, Andrew Schulz, Shravan Ambudkar, Robin Zbinden, Ana Maria Quintero
❤️ Jon Van Oast, Filip Dorm, Braden Charles DeMattei, Dylan Van Bramer (she/her), Catherine Villeneuve, Danilo Ortelli, Tessa Rhinehart, Shravan Ambudkar, Vanesa Reyes, Nathan Fox, Asa DeHaan, Taiki Sakai - NOAA Affiliate, Nora Gourmelon, Loyani Loyani, Talia Speaker, Tuan-Anh VU, Ana Maria Quintero, Remi Gosselin, Emilio Luz-Ricca
Brian Geuther (brian.geuther@jax.org)
2024-08-01 11:04:52

*Thread Reply:* Do you mind if we cross-post this elsewhere?

Sara Beery (sbeery@caltech.edu)
2024-08-01 11:06:36

*Thread Reply:* Please please do!!!

✅ Brian Geuther
Sara Beery (sbeery@caltech.edu)
2024-08-01 11:06:41

*Thread Reply:* Spread the word widely

👀 Brittany Aguilar
Autumn Nguyen (ngoc54n@mtholyoke.edu)
2024-08-15 09:14:14

*Thread Reply:* @Sara Beery Do we have a list of PIs, as well as their research groups and labs and institutions, who will be joining the session? I hope to do some research into the work that those PIs and groups do before the meeting, so that I can make the most of our time together during the session!

Sara Beery (sbeery@caltech.edu)
2024-08-15 10:24:40

*Thread Reply:* Good idea if we do this again! I will share the videos on YouTube after, so that will be a more permanent resource, but because there has been some flux about which faculty will join live vs record videos for later, I won't be able to share a list ahead of time

❤️ Autumn Nguyen
Autumn Nguyen (ngoc54n@mtholyoke.edu)
2024-08-15 10:58:22

*Thread Reply:* got it, thanks Prof Beery!

Kalindi Fonda (kalindi.fonda@gmail.com)
2024-08-19 07:09:22

*Thread Reply:* Oh no I missed this! Has it been recorded?

Sara Beery (sbeery@caltech.edu)
2024-08-20 07:39:25

*Thread Reply:* Yes, recordings should all be up hopefully this week

Sara Beery (sbeery@caltech.edu)
2024-08-02 16:44:27

Submit your abstract to an AGU2024 session titled "Scalable Biodiversity Assessment with Geospatial Foundation Models and Ecosystem Modeling."

We invite biodiversity assessment contributions from learning strategies to computational demands, from pre-training dataset construction to downstream applications, and from benchmarks to evaluation metrics.

Join us to discuss the emerging possibilities and limitations of Geospatial Foundation Models for scalable biodiversity assessment.


Session Abstract

Satellite-based ecosystem monitoring of forests provides a continuous and consistent assessment of afforestation, reforestation, and land use conversions on a global scale. However, to estimate a spatial distribution of different species and ecosystem biodiversity, hyperlocal ecosystem observations need to be fused with remote sensing imagery. As such, biodiversity assessments require a strong collaboration between domain experts, machine learning researchers, software engineers and local communities to verify and calibrate the increasingly AI-based prediction models. This includes multi-modal geospatial foundation models which offer a method to combine various modalities (eDNA, text, audio, and image) and overcome the training data sparsity required to generalize models across distinct geographic areas. In this session, we invite presentations that discuss the challenges of collecting and fusing relevant datasets, training, and validating AI models to recognize key parameters, and scaling solutions to restore ecosystem biodiversity on regional and global scales.

View Session Details: https://lnkd.in/d8cMtdni. The session viewer and abstract submission system is open here: https://lnkd.in/dS6QMRna. The AGU24 abstract submission deadline is Wednesday, 7 August at 11:59 PM EDT.

👀 Alan Stenhouse, Sonny Burniston
Eelke (eelke@aeria.ai)
2024-08-04 09:27:17

fair

➕ Nate Harada, Santiago Ruiz Guzman
🙌 Nate Harada
💯 Nate Harada
Sara Beery (sbeery@caltech.edu)
2024-08-06 20:22:52

I have some availability at ESA tomorrow morning if anyone wants to meet and chat! DM me :)

😎 Jason Holmberg (Wild Me)
Alex Rood (alex.rood@wildlabs.net)
2024-08-07 13:22:00

Hi everyone! My name is Alex, I'm on the community team at WILDLABS, the global conservation technology community. It's nice to meet you all!

This week, WILDLABS is having our 9th annual #Tech4Wildlife Photo Challenge, where the conservation tech community shares photos and videos of how they're using technology for wildlife. It's a great way to celebrate the sector, connect with peers, and see what everyone is working on. We'd love to see some submissions from the AI community 😄

To participate, just share photos and videos of your AI conservation work with the hashtag #Tech4Wildlife and tag us @WILDLABSNET on X, LinkedIn, and/or Instagram. I hope to see some of your submissions!

🙌 Josiah Hester, Ed Miller, Dan Morris, Jason Holmberg (Wild Me), Carly Batist, Shawn Johnson, Aude Vuilli, Alexander Merdian-Tarko, Talia Speaker
😎 Jon Van Oast, Sara Beery, Jason Holmberg (Wild Me)
🐾 Alan Stenhouse
Ben Weinstein (benweinstein2010@gmail.com)
2024-08-08 13:18:41

The Joint Statistical Meetings are here in Portland and @Toryn Schafer and I just had a nice conversation on how to get more integration between the statistics community and AI4Ecology. @Ben Augustine and others jump in. Just starting a thread here in case others want to be involved (@Eric Orenstein). Areas of intersection include 1) multi-observer processes that use both human and automated observers, 2) finding creative solutions to move toward hierarchical models that capture uncertainty, 3) going beyond cross-validation and bootstrapping as a way of assessing prediction confidence intervals, and 4) merging automated detections with existing time-series of human observations. If there are others in the community interested in solving these kinds of challenges, see this funding opportunity: https://new.nsf.gov/funding/opportunities/computational-data-enabled-science-engineering. I think many of us would be happy to support proposals that bring greater quantitative rigor into using the outputs of these models. I'd like to start a quick literature thread here to bring community knowledge together: https://www.biorxiv.org/content/10.1101/2023.02.20.529272v1.abstract, https://www.mdpi.com/2504-446X/8/2/54, https://zslpublications.onlinelibrary.wiley.com/doi/full/10.1002/rse2.356; please drop in other papers you have authored or know of.

NSF - National Science Foundation
MDPI
❤️ Timm Haucke, Esther Rolf, Toryn Schafer, Shir Bar, Sara Beery, Jason Holmberg (Wild Me), Justin Kay, Casey Youngflesh, Elizabeth Campolongo, Subhransu Maji, Josh Hewitt, Gustavo Perez, Emilio Luz-Ricca, Alan Stenhouse
Toryn Schafer (tschafer@tamu.edu)
2024-08-08 13:48:22

*Thread Reply:* It may be outdated, but I believe @Ben Augustine had done a lit search related to thrust 2 for a Powell Center working group 4 years ago

Justin Kitzes (justin.kitzes@pitt.edu)
2024-08-08 17:25:01

*Thread Reply:* Our group is very interested in this area, would be great to chat about it - particularly area 1 and adding variations of marked abundance models with uncertain marks

➕ Sara Beery, Subhransu Maji
👍 Justin Kay, Toryn Schafer
Sara Beery (sbeery@caltech.edu)
2024-08-08 18:41:52

*Thread Reply:* My group is interested as well, particularly @Timm Haucke and @Justin Kay

➕ Justin Kay, Timm Haucke, Subhransu Maji
👍 Toryn Schafer
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2024-08-08 19:32:02

*Thread Reply:* Wild Me is definitely interested in this. I will bring cookies.

👍 Justin Kay
🍪 Joe Nangle, Alan Stenhouse
Casey Youngflesh (caseyyoungflesh@gmail.com)
2024-08-09 11:28:45

*Thread Reply:* This is great. I’d be very interested in chatting more about uncertainty, data fusion, and hierarchical models

❤️ Toryn Schafer
👍 Justin Kay
Dan Sheldon (sheldon@cs.umass.edu)
2024-08-09 11:53:36

*Thread Reply:* Hi folks, we’re also interested in this topic at UMass. We have a couple related papers (1, 2), and would be glad to chat with folks. @Gustavo Perez @Subhransu Maji @gvanhorn

arXiv.org
arXiv.org
👍 Justin Kay, Subhransu Maji, Gustavo Perez
🙌 Ben Weinstein, Subhransu Maji, Gustavo Perez
Ben Weinstein (benweinstein2010@gmail.com)
2024-08-09 12:49:47

*Thread Reply:* @Dan Sheldon @Gustavo Perez, we have airborne modeling work for BOEM (https://www.fisheries.noaa.gov/inport/item/67243); I'd be interested in trying a lightweight version of this for their biodiversity surveys. I will DM you once I read closely.

fisheries.noaa.gov
👍 Dan Sheldon, Gustavo Perez
Elly Knight (ecknight@ualberta.ca)
2024-08-09 16:33:58

*Thread Reply:* Also interested! I spend a lot of time thinking about detection probability on the acoustic side of things, esp. with respect to human & AI integration. Would love to be kept in the loop (although we’re in Canada, so not eligible for NSF)

Nathan Jacobs (jacobsn@wustl.edu)
2024-08-12 10:06:13

*Thread Reply:* My group is interested. We've put out a few papers that could potentially fit: • BirdSAT: Cross-View Contrastive Masked Autoencoders for Bird Species Classification and Mapping (WACV 2024): https://arxiv.org/pdf/2310.19168 • LD-SDM: Language-Driven Hierarchical Species Distribution Modeling (still a preprint): https://arxiv.org/pdf/2312.08334 • PSM: Learning Probabilistic Embeddings for Multi-scale Zero-shot Soundscape Mapping (will be published at ACM MM 2024): https://openreview.net/pdf?id=qnW0LQXY5L

Nathan Fox (foxnat@umich.edu)
2024-08-12 11:05:05

*Thread Reply:* Would also be interested in being involved! Very relevant to my current research.

Toryn Schafer (tschafer@tamu.edu)
2024-08-13 13:28:35

*Thread Reply:* I am interested in submitting an invited session proposal for the next JSM focusing on thrusts 2 and 3. The JSM 2025 theme is "Statistics, Data Science, and AI Enriching Society". We could do a paper session or panel. Here is more information: https://ww2.amstat.org/meetings/jsm/2025/invitedsessions.cfm#invited

ww2.amstat.org
👀 Nathan Fox, Sara Beery, Emilio Luz-Ricca
Rowan Converse (rowanconverse@unm.edu)
2024-08-14 13:47:03

*Thread Reply:* Would also like to be kept in the loop!

Matt Morrissette (matt.morrissette@wildlifeprotectionsolutions.org)
2024-08-09 13:56:34

👋 Hi everyone!

👋 Elizabeth Campolongo, Nino Migineishvili, Sowbaranika, Sara Beery, Stephanie O'Donnell, Dan Morris, Ekaterina Nepovinnykh, Mitchell Rogers, Shawn Johnson, Don Cosseboom, Millie Chapman, Robin Zbinden, Céline Angonin, Jennifer, Alexander Merdian-Tarko, Talia Speaker
David Rolnick (dsrolnick@gmail.com)
2024-08-12 22:56:43

Climate Change AI is excited to announce the 2024 edition of our Innovation Grants program!

Our 2024 Innovation Grants program will fund year-long projects at the intersection of climate change and machine learning, offering up to USD 150K per project, with a total of up to USD 1.4M available. We are grateful for the support of Quadrature Climate Foundation, Google DeepMind, and Global Methane Hub, and for fiscal support from the Canada Hub of Future Earth.

For the Main Track, example subject areas include, but are not limited to: • Machine learning (ML) to aid mitigation approaches in sectors such as agriculture, buildings and cities, heavy industry, power and energy systems, transportation, and forestry. • ML applied to societal adaptation to climate change, including disaster prediction, management, and relief. • ML for climate and Earth science, ecosystems, and natural systems. • ML for R&D of low-carbon technologies such as electrofuels and carbon capture. • ML approaches in behavioral and social science related to climate change, including climate finance, economics, justice, and policy. • Projects addressing AI governance in the context of climate change or assessing the greenhouse gas emissions impacts of AI.

In addition to the Main Track, this year’s program also features two Special Tracks:

• Special Track on Methane: Focusing on methane-related climate change mitigation in the short/medium term. • Special Track on Dataset Gaps: Emphasizing the creation of datasets or simulators, with potential support from a Google DeepMind researcher.

The submission deadline is September 15, 2024. More information, including eligibility criteria, is available at https://www.climatechange.ai/calls/innovation_grants_2024. We will also hold informational webinars on July 30, 2024, at 9am ET/1pm UTC (register) and August 15, 2024, at 12pm ET/4pm UTC (register). Recordings will be available following the live webinars.

💙 Patrick Beukema, Subhransu Maji, Oisin Mac Aodha, Stephanie O'Donnell, Gustavo Perez, David Russell, Evan Eskew
👀 Stephanie O'Donnell, Carly Batist, Sara Beery, Talia Speaker
💚 Alan Stenhouse
Dan Stowell (dan.stowell@naturalis.nl)
2024-08-14 05:58:22

Hi folks. What's a good URL/citation for "COCO camera trap format" please? Web search today gives me a link to a defunct branch of megadetector. Is it maybe just here?

Sara Beery (sbeery@caltech.edu)
2024-08-14 08:56:02

*Thread Reply:* We introduced it in the Recognition in Terra Incognita paper, in the appendix, so you could cite that I guess?

Sara Beery (sbeery@caltech.edu)
2024-08-14 08:57:07

*Thread Reply:* But @Dan Morris looks like we should update the links on LILA for the format

👍 Dan Stowell
Dan Morris (agentmorris@gmail.com)
2024-08-14 10:09:45

*Thread Reply:* I have a specific section that I link to for this:

https://github.com/agentmorris/MegaDetector/tree/main/megadetector/data_management#coco-camera-traps-format

Sara correctly points out that a few format links on LILA have become stale. Good catch, I will fix that in a few days, and also probably replace with a shortlink.

👍 Dan Stowell, Sara Beery, Elizabeth Campolongo
🎉 Jon Van Oast
Dan Morris (agentmorris@gmail.com)
2024-08-27 20:42:08

*Thread Reply:* I've fixed this error in all the places it occurred on LILA, and you can use the following shortlink now to refer to the CCT format specification:

https://lila.science/coco-camera-traps

Thanks again @Dan Stowell for catching this.

GitHub
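For anyone skimming this thread later: the core of the CCT format is a small JSON schema. A minimal illustrative sketch (field values invented; check the spec linked above for the authoritative field list, including optional fields like datetime and sequence info):

```python
import json

# Minimal illustrative COCO Camera Traps (CCT) file. Values are made up;
# see the linked spec for the full set of optional fields.
cct = {
    "info": {"version": "1.0", "description": "Example CCT dataset"},
    "images": [
        {
            "id": "img_0001",              # string IDs, unlike vanilla COCO
            "file_name": "site01/img_0001.jpg",
            "width": 1920,
            "height": 1080,
            "location": "site01",          # enables location-aware train/test splits
        }
    ],
    "annotations": [
        {
            "id": "ann_0001",
            "image_id": "img_0001",
            "category_id": 1,
            "bbox": [100, 200, 300, 250],  # [x, y, width, height] in pixels
        }
    ],
    "categories": [
        {"id": 0, "name": "empty"},        # category 0 is reserved for empty images
        {"id": 1, "name": "animal"},
    ],
}

serialized = json.dumps(cct, indent=1)
print(sorted(cct.keys()))
```

The location field is what makes splits like those in Recognition in Terra Incognita possible, since you can hold out entire camera locations for testing.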
Timm Haucke (timm@haucke.xyz)
2024-08-14 15:10:50

Probably everyone here is painfully aware of Slack's limitation of message history to the past 90 days. Luckily, we've now found a way to make an archive of older messages available here: http://beerylab.csail.mit.edu/AIforConservationArchive/ (props to slack-export-viewer: https://github.com/hfaran/slack-export-viewer). This archive should include all messages from public channels only, back to the inception of AI for Conservation. Please let me know if there is anything missing or something not supposed to be in there. Since exporting the messages from Slack, converting them, and uploading them is a manual process, we can't guarantee updating the archive super regularly. I hope this still helps people find useful links / papers / resources that they have been unable to dig up!

😎 Jon Van Oast, Shir Bar, Malte Pedersen, Gustavo Perez, Justin Kay, Brian Geuther, David Russell, Sara Beery, Chris Lange, Juan Sebastián Cañas, Varshani Brabaharan, Rebecca Wilks
🙌 Carly Batist, Stephanie O'Donnell, Avi Sundaresan, Justin Kay, Nathan Fox, Taiki Sakai - NOAA Affiliate, Sara Beery, Tarun, Viktor Domazetoski, Alessandra Vidal Meza, Aamir Ahmad, Elizabeth Campolongo, Enis Berk Çoban, Kalindi Fonda, Emilio Luz-Ricca, Marion Richardot, Victor Anton, Adrien Fontvielle
🙌:skin_tone_2: Cara Appel
🙌:skin_tone_3: Alan Stenhouse
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-08-15 03:23:30

*Thread Reply:* Thanks! btw, Slack recently increased the 90 days to 365.

Timm Haucke (timm@haucke.xyz)
2024-08-15 08:36:34

*Thread Reply:* Interesting, I’m still seeing 90 days, but let’s hope this change is indeed rolled out!

Autumn Nguyen (ngoc54n@mtholyoke.edu)
2024-08-15 09:12:30

*Thread Reply:* thank you so much Timm!

🙌 Timm Haucke
Sara Beery (sbeery@caltech.edu)
2024-08-14 16:07:33

May be a bit late for this, but just saw the following in California:

Sara Beery (sbeery@caltech.edu)
2024-08-14 16:07:43

CALL FOR EXPERTS: AI Tools for Addressing Conservation and Biodiversity

Dear Colleagues,

The California Council on Science and Technology (CCST) is seeking nominations for an Expert Briefing on artificial intelligence tools for addressing conservation and biodiversity challenges (see details below). Please send nominations (including self-nominations) for experts who can give background on these topics and potentially serve as a briefing panelist or moderator. We also invite suggestions for additional aspects to include in the scope of the briefing.

We will begin reviewing nominations by August 7 COB. Please submit nominations via the online form or email Science Services Manager, John Thompson, PhD, at john.thompson@ccst.us.

Toward a Resilient California: AI Tools for Addressing Conservation and Biodiversity

Biodiversity is declining globally at unprecedented rates due to a variety of cascading and compounding anthropogenic impacts including climate change, habitat destruction, pollution, and overexploitation. To address this crisis, governments around the world are committing to the 30 by 30 initiative with the goal of conserving 30% of the world’s lands and waters by 2030 to protect and restore biodiversity. High quality biodiversity data on the distribution of animals and ecosystems is vital to successfully meeting these goals but is often costly or time consuming to collect. Advances in the fields of artificial intelligence and machine learning are a promising avenue for supporting researchers and practitioners who are building these datasets.

This Expert Briefing will explore examples of novel research projects that are utilizing artificial intelligence tools within the conservation and biodiversity space to augment ongoing efforts to collect robust datasets. The discussion will explore: lessons learned and challenges of incorporating AI into conservation work, opportunities for using these tools to inform management decisions, and pathways for leveraging civic science to support the development of this valuable conservation work.

CCST seeks nominations of individuals (including self-nominations) with relevant expertise who can give background on these topics and potentially serve as a briefing panelist or moderator.

Submit a Nomination

California Council on Science & Technology (CCST)
Google Docs
Devis Tuia (devis.tuia@epfl.ch)
2024-08-15 11:26:35

The session is ON! Please join us

@Sara Beery (https://aiforconservation.slack.com/team/ULWGNMZCK)
❤️ Oisin Mac Aodha, Ted Schmitt, Thor Veen, Stephanie O'Donnell, Omiros Pantazis, Shir Bar, Michael Bunsen, Subhransu Maji, Shawn Johnson
🎉 Michael Bunsen
🐝 Michael Bunsen
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-08-15 13:03:25

*Thread Reply:* Fantastic event!! thanks to Sara and all organizers!

❤️ Sara Beery
Asa DeHaan (adehaan@agci.org)
2024-08-17 04:53:08

*Thread Reply:* Where will I be able to find the recording?

👀 Anton Alvarez, Alan Stenhouse
Sara Beery (sbeery@caltech.edu)
2024-08-23 16:41:26

AI and Ecology Research Lab introductions now LIVE on YouTube!!!

Check it out : https://youtube.com/@AIforConservation/playlists

And if you're a PI who works in AI+Ecology and want to be included, send me a 3-min intro video and we will add it!!

YouTube
YouTube
💚 Millie Chapman, Braden Charles DeMattei, Ana Maria Quintero, Rose Wenxin Zhao, Shir Bar, Jose Ruiz-Munoz, Alessandra Vidal Meza, Elizabeth Campolongo, Shravan Ambudkar, Esther Rolf, Aarshi Jain, Alan Stenhouse, Vanesa Reyes, Risa Shinoda, Gözde Cilingir, Viktor Domazetoski, Robin Zbinden, Violet Turri, Anton Alvarez, Takumi Sato, Sepand Dyanatkar
❤️ Brian Geuther, Aarshi Jain, Robin Zbinden
Alan Stenhouse (alan.stenhouse@csiro.au)
2024-08-23 23:05:36

*Thread Reply:* That was great, thanks to all presenters and to @Sara Beery for organising! Caught everything on the Youtube channel just now.

❤️ Sara Beery
Peter Van Dijck (petervandijck@gmail.com)
2024-08-25 09:33:05

👋 Hi everyone! We are organizing a free, in-person event on AI for Climate Action during Climate Week NYC in September. If you're in town, you can RSVP now, the page is up here: https://events.work.co/aiforclimateaction Shares in channels where people might be interested are appreciated!

🙌:skin_tone_4: Chris Llorca
Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2024-08-26 07:13:56

Hey guys! I just wanted to let you know that we have published the final program for the 4th International Workshop on Camera Traps, AI, and Ecology! 📅 https://camtrap2024.fh-ooe.at/program/

Almost at the same time, we also received our hundredth registration 🎉 However, registration is still possible, so join us from September 5-6 at the Hagenberg Campus of FH Upper Austria, or online, for a deep dive into cutting-edge tech and ecological research. 🌍💡

Our hybrid agenda includes: • Four incredible keynotes by top experts such as Robin Sandfort (capreolus e.U.), Cliodhna Quigley (Universität Wien | University of Vienna), Stefano Mintchev (ETH Zürich), Claudia Probst and Georg Roman Schneider (Fachhochschule Oberösterreich) • Three tutorial sessions by Danielle McKenney (SWAROVSKI OPTIK), Andreas Leitner and the Smartmulticopter team, as well as Piotr Tynecki (TRAPPER) • Four paper sessions including ten incredible submissions in the realm of camera trapping, insect monitoring, drone-based wildlife monitoring, as well as analysis of forests • Social Event and dinner as part of this year's Ars Electronica Festival (only available for registered on-site participants)

This is a fantastic opportunity for collaboration and innovation in wildlife monitoring, ecology research, and more. 🦌🐝🤖 Let’s shape the future of conservation together!

Join us and be part of the conversation!

Camera traps, AI, and Ecology 2024
📹 Marion Richardot, Timm Haucke, Valentin Gabeff
🙌 Marion Richardot, Timm Haucke, Carly Batist, Piotr Tynecki
Kalindi Fonda (kalindi.fonda@gmail.com)
2024-09-05 05:11:29

*Thread Reply:* This is happening now (online too), and in case anyone is here, I would love to say hi 👋 🌱

❤️ Christoph Praschl, Anton Alvarez
Dan Morris (agentmorris@gmail.com)
2024-09-09 16:07:27

*Thread Reply:* I didn't manage to call in to this event... folks who were there physically or virtually, share something neat you learned/heard/saw/said/met/detected/classified?

Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2024-09-10 05:02:54

*Thread Reply:* Oh no, sad to hear that @Dan Morris. I hope due to time issues and not because of technical problems 🥲 Since I had the honor to host the event, I don‘t think I should say my opinion too loud (self-praise and so on 😂 ), but I really think we had some pretty awesome presentations within the workshop 🙂 FYI: I will upload all the recorded presentations in the next days, so if somebody missed the workshop, you can rewatch at least the presentations on our homepage http://camtrap2024.fh-ooe.at (unfortunately with one exception, due to a missing agreement on recordings from the company).

camtrap2024.fh-ooe.at
Dan Morris (agentmorris@gmail.com)
2024-09-10 10:15:06

*Thread Reply:* Clarifying: yes, just time zones and my own inertia, not technical issues. I'll keep an eye out for the recordings, thanks!

Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2024-09-11 04:44:26

*Thread Reply:* Still sad, but I'm relieved that it was not because of technical issues 😄

Little follow-up: I have used the time to upload some impressions of the workshop as well as the presentations and papers: https://camtrap2024.fh-ooe.at/gallery/ https://camtrap2024.fh-ooe.at/program/

The actual proceedings are currently in the making, but this will still take some time 🙂

Camera traps, AI, and Ecology 2024
Camera traps, AI, and Ecology 2024
Kalindi Fonda (kalindi.fonda@gmail.com)
2024-09-12 06:01:41

*Thread Reply:* I always say the reason I'm into tech is the scalability and leverage it provides, and I usually think about it from the point of view of how creating tools makes them accessible at scale...

There was a point made by @Robin Sandfort that really resonated: he was talking about how AudioMoths being cheap compared with high-end devices means that one can put many more out instead of a single high-end one. And sure, there is a bit of a loss in recording quality, but one might never have gotten a recording in the first place with only one location being monitored.

Then we continued the conversation and Robin shared a story about how one area where he had installed some AudioMoths had been flooded, and some devices survived, so he now has the "sound of a flood": the sounds of water rushing in, the frogs getting closer and closer, then the glug-glug, and then the whole thing in reverse.

The chance that one high-end device would break in a flood is nontrivial, but the chance that all the AudioMoths would break in that flood is much lower.

So the point that was elusive before but has now been added to my "I <3 tech because scalability" is redundancy. And probably there are many ways to think about it, also in terms of knowledge, or types of projects...

🙂 Robin Sandfort
Kalindi Fonda (kalindi.fonda@gmail.com)
2024-09-12 06:07:53

*Thread Reply:* Plus the organisation was incredible. The whole conference was gentle and caring, the talks were interesting, and covered various topics.

Then at the end of the first day we had a bus for us that took us to town, to an interactive art exhibition (which provided for external stimulus to further conversations), and then a dinner under some trees, where there were lots of chats and smiles.

I'll take a look at my notes I know I wrote down a few quotes of expressions I liked, and maybe even some more on the content. 🌟

Kalindi Fonda (kalindi.fonda@gmail.com)
2024-09-12 06:10:13

*Thread Reply:* As a personal aside, I've been trying to rope my vet technician friend into a more active role in relationship to nature/conservation. She had studied forestry, and is now working as a vet technician, but I know she has "the call of the forest". I was sending her pictures and insights, and I think something is starting to shift within her 🌱

Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2024-09-12 06:22:28

*Thread Reply:* Really glad to hear that @Kalindi Fonda! It was a pleasure having you at the workshop and meeting you 🙂

🥳 Kalindi Fonda, Dan Morris
👍 Robin Sandfort
Peter van Lunteren (contact@pvanlunteren.com)
2024-08-26 10:43:57

Does anyone know what the best way is to individually recognise Orangutans from images? My gut tells me to zoom in on the face, but I can't really find any literature about that.

Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2024-08-26 11:14:30

*Thread Reply:* Light Internet search suggests faces (primarily) with some challenges around certain age classes. If there is a dataset of individuals, you might try our multispecies model, which includes faces for some species:

https://huggingface.co/conservationxlabs/miewid-msv2

huggingface.co
🙌 Peter van Lunteren
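For anyone new to re-identification models like this: the common pattern, independent of any particular model, is an embedding extractor plus nearest-neighbor matching against a gallery of known individuals. A generic sketch, where embed() is a stand-in for a real feature extractor (not the actual MiewID API):

```python
import numpy as np

# Generic embedding-based individual re-identification sketch:
# a feature extractor maps each cropped face/body image to a vector,
# and new sightings are matched to known individuals by cosine similarity.
rng = np.random.default_rng(0)

def embed(image_id: str, dim: int = 128) -> np.ndarray:
    """Stand-in for a real feature extractor; returns a unit vector."""
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# "Gallery" of known individuals and a new query sighting that mostly
# resembles individual B, with a little noise mixed in.
gallery = {name: embed(name) for name in ["indiv_A", "indiv_B", "indiv_C"]}
query = 0.9 * gallery["indiv_B"] + 0.1 * embed("noise")
query /= np.linalg.norm(query)

# Rank gallery individuals by cosine similarity to the query.
scores = {name: float(vec @ query) for name, vec in gallery.items()}
best_match = max(scores, key=scores.get)
print(best_match)
```

In practice the gallery embeddings come from labeled reference images, and a match below some similarity threshold is treated as a potential new individual for human review.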
Devis Tuia (devis.tuia@epfl.ch)
2024-08-26 16:20:43

*Thread Reply:* I think @Otto Brookes has been working on face recognition for gorillas. Maybe a good person to contact!

✅ Otto Brookes
🙌 Peter van Lunteren
Anton Alvarez (aalvarez@wwf.es)
2024-08-28 08:41:08

*Thread Reply:* I got some "good" results using MegaDescriptor with bear faces; could be nice to compare MiewID and MegaDescriptor.

huggingface.co
🙌 Peter van Lunteren
Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2024-08-28 10:10:59

*Thread Reply:* Is the bear face dataset open?

Jason Holmberg (Wild Me) (holmbergius@gmail.com)
2024-08-28 10:12:04

*Thread Reply:* https://huggingface.co/conservationxlabs/miewid-msv2

huggingface.co
Anton Alvarez (aalvarez@wwf.es)
2024-08-28 10:55:47

*Thread Reply:* Nope, it was from the BearID project. I have access via the AI for Bears Fruitpunch Challenge; @Ed Miller could tell you more about the permissions for the dataset.

Otto Brookes (otto.brookes@bristol.ac.uk)
2024-08-29 04:55:41

*Thread Reply:* If there's only a single Orangutan in each image (like the one shown), I would just train a model on the whole image! If you have multiple apes in each image then you'll likely need to localise them (either their face or full body). Happy to discuss further if it's helpful!

🙌 Peter van Lunteren
Peter van Lunteren (contact@pvanlunteren.com)
2024-08-30 02:44:41

*Thread Reply:* Thanks for the info everybody! I'll play around with the models and if needed will get in touch with you @Otto Brookes 😁

👍 Otto Brookes
Serge Wich (sergewich@gmail.com)
2024-09-07 03:50:58

*Thread Reply:* Hi Peter, together with Dan Schofield from Oxford University, a group of orangutan researchers is working on this with some large datasets from wild orangutans. It follows the work Dan did on chimpanzees.

🙌 Peter van Lunteren
Serge Wich (sergewich@gmail.com)
2024-09-07 03:51:34

*Thread Reply:* Just get in touch and I can give you more details.

Peter van Lunteren (contact@pvanlunteren.com)
2024-09-08 12:55:11

*Thread Reply:* @Serge Wich , that sounds great! I’ll discuss this with the other project members and reach out to you via email.

Vanesa Reyes (vanesa.reyes@wildlabs.net)
2024-08-26 16:38:31

Hey there! At WILDLABS we have launched a bioacoustics horizon scan, and since AI has become a fundamental part of the field, we'd love to hear from you what innovations do you predict will revolutionise bioacoustics in the next two decades? These will be considered for inclusion in the horizon scan prioritisation process, involving diverse experts from across the globe. You can submit your ideas using this Google form.

👍 Jose Ruiz-Munoz, Nanticha Ocharoenchai (Lyn)
🙌 Carly Batist, Robin Sandfort
Rita Pucci (rita.pucci85@gmail.com)
2024-08-27 05:17:19

Hey everyone, has anyone here published in "https://ietresearch.onlinelibrary.wiley.com/hub/journal/17519640/homepage/call-for-papers/si-2023-000769"? I submitted a paper 8 months ago but I haven't received any feedback. 😞

😢 Burooj Ghani, Robin Zbinden, Avi Sundaresan
Devis Tuia (devis.tuia@epfl.ch)
2024-08-27 05:42:18

*Thread Reply:* we have a couple of the editors in this Slack, maybe DM them with your inquiry 😉

Rita Pucci (rita.pucci85@gmail.com)
2024-08-27 05:42:32

*Thread Reply:* Thanks

Christoph Praschl (christoph.praschl@fh-hagenberg.at)
2024-08-29 12:10:08

*Thread Reply:* Maybe @Majid Mirmehdi or @Otto Brookes can tell you more 🙂

Majid Mirmehdi (m.mirmehdi@bristol.ac.uk)
2024-08-29 13:42:05

*Thread Reply:* Dear @Rita Pucci,

I sincerely apologise for the delays with your paper. This is highly unusual and our turnaround times are normally well below 3 months.

The Editor handling your article has received 35 "declines" from nominated reviewers, which in itself is highly unusual. There is one review available, and we are chasing another referee whose review we are waiting on. I hope we can be in touch with a decision soon.

Best wishes, Majid

Rita Pucci (rita.pucci85@gmail.com)
2024-08-29 14:15:45

*Thread Reply:* Ok thank you for your interest

Atul Ingle (ingle@uwalumni.com)
2024-08-27 14:03:51

I'm giving an informal talk to a group of grad students in a biology/ecology department on adopting computer vision tools in their research (e.g. when dealing with camera trap data). I was planning to i) tell them about GUI tools like Timelapse, ii) do a live demo of MegaDetector in a Colab notebook, and, if there's time, iii) give them some pointers on training custom classifiers. Any advice on what else I could include? #help_needed

Dan Morris (agentmorris@gmail.com)
2024-08-27 20:11:23

*Thread Reply:* This is a very camera-trap-specific answer, but...

If there are folks in the audience who aren't Python-savvy and may be intimidated by Colab, consider demo'ing EcoAssist, or at least highlighting that you don't need to deal with any Python to use AI.

But I'm not discouraging you from also showing stuff in Colab. FWIW if you haven't come across it, we have a pretty straightforward MegaDetector Colab notebook here.

😁 Peter van Lunteren
👍 Judith Dekkers
Atul Ingle (ingle@uwalumni.com)
2024-08-28 02:30:48

*Thread Reply:* :gratitudethankyou: that's a great suggestion! I agree that diving right into a Colab notebook might turn some folks away, so I'll start by demo'ing EcoAssist and Timelapse.

And megadetector_colab.ipynb is indeed the notebook I was planning to demo 🙂

I created a barebones classifier demo using some scripts in your megadetector repo but maybe there's a megadetector_classifier.ipynb demo somewhere that I'm missing?

Dan Morris (agentmorris@gmail.com)
2024-08-28 08:15:09

*Thread Reply:* <unsolicited advice that isn't what you asked for>

Training a custom classifier is IMO a little advanced for an introductory session, i.e. it may overstate the degree to which you need to train a custom classifier (which is not typically worth it... IMO in 95% of cases you will save more time overall by using either no classifier or a classifier that's "close enough" than by doing all the work required to train a custom classifier). So instead, consider using the (now very impressive!) model zoo that's built into EcoAssist to demonstrate the growing variety of classifiers that are already out there, and the way you might use the results from a species classifier in Timelapse.

</unsolicited advice>

To your actual question, there isn't a notebook for training a classifier within the MegaDetector repo. To each their own, but for my two cents, classifier training in a notebook is a tough putt; it's such a fundamentally asynchronous and infrastructure-dependent experience. So if you want to demonstrate workflows for training classifiers, consider demo'ing one of:

MEWC (exactly what you want, but not easy to demo in a notebook): https://github.com/zaandahl/mewc

Zamba Cloud (for video rather than stills, but still a really slick workflow for no-code training of custom classifiers): https://www.zambacloud.com/

All that said, if you do end up putting together a notebook to demonstrate classifier training, please share here!

There is also this notebook (a couple years old) that fine-tunes MegaDetector to add species categories; the notebook is nice, although this is no longer the approach I would recommend:

https://www.kaggle.com/code/evmans/train-megadetector-tutorial

Also, unrelated to training, if it's a pretty broad session, consider reminding folks that they don't have to do everything on their laptops in 2024, i.e. for folks who might be up for a cloud-based solution for AI, image review, and data management, consider demo'ing, e.g., Wildlife Insights.

Atul Ingle (ingle@uwalumni.com)
2024-08-28 17:16:59

*Thread Reply:* Really appreciate all these suggestions! I wasn't planning to make a colab notebook for training a classifier, just inference with a pretrained model (e.g. your megaclassifier-efficientnet-b3 model file). I guess that's pretty straightforward.

Dan Morris (agentmorris@gmail.com)
2024-08-28 18:10:01

*Thread Reply:* Got it, all clear, ignore my unsolicited advice then. 🙂

There is not exactly a notebook for running MegaClassifier, but... there is a notebook I use for doing All The Things involved in a complete detection+classification batch, and "All The Things" includes running MegaClassifier:

https://github.com/agentmorris/MegaDetector/blob/main/notebooks/manage_local_batch.ipynb

I actually use the .py version, but it's the same as the .ipynb version (the latter is auto-generated from the former):

https://github.com/agentmorris/MegaDetector/blob/main/notebooks/manage_local_batch.py

This notebook doesn't run MegaClassifier directly from Python; rather, it writes a shell script (.bat or .sh, depending on the OS) that does the MegaClassifier stuff. See the cell called "Run MegaClassifier (actually, write out a script that runs MegaClassifier)".

Piotr Tynecki (piotr@tynecki.pl)
2024-08-28 18:27:16

*Thread Reply:* @Atul Ingle feel free to include the TrapperAI species model in your edu program after animal recognition. It is easy to load and use in a notebook/Colab, if the list of supported species covers your context.

👍 Atul Ingle
Peter van Lunteren (contact@pvanlunteren.com)
2024-08-30 02:51:49

*Thread Reply:* @Atul Ingle If you were planning on using https://github.com/zaandahl/mewc to train classifiers, note that there is an option of loading the resulting models into EcoAssist for inference.

https://github.com/PetervanLunteren/EcoAssist/blob/main/markdown/MEWC_integration.md

👍 Atul Ingle
Victor Anton (victor@wildlife.ai)
2024-09-02 16:48:13

Wildlife.ai has a dream job opportunity🤩🤩🤩 Do you want to be the next Product & Community Manager of a thriving non-profit? Apply now! #jobs #nz #community #grassrootsconservation

:flag_nz: Mitchell Rogers, Alan Stenhouse
Laura Madrid (lcmmadrid5@gmail.com)
2024-09-03 13:06:54

Hi everyone 💖 My name is Laura and I just recently graduated with a Bachelor's in Computer Science from the University of Toronto. I really like computer vision and I am planning to apply for a master's program related to computer vision + (sustainability, healthcare, or HCI). I have been working on Coursera's AI for Good specialization, which led me to Prof. Sara Beery's site and this Slack! I want to get involved in this space because this intersection of fields seems really interesting. I would love to chat with any of you for advice on how to get involved. Hope that everyone is having a nice day :)

👋 Timm Haucke, Malte Pedersen, Atul Ingle, Chris Lange, Tor Henrik Ulsted, Ben Weinstein, Gustavo Perez, Dan Morris, Enis Berk Çoban, Sowbaranika, Jose Ruiz-Munoz, Sebastien Ouellet, Jason Holmberg (Wild Me), Andy Viet Huynh, Omiros Pantazis, Izzy Zhu, Sara Si-Moussi, Aakash Gupta, Robin Zbinden, Alexander Merdian-Tarko, Jonathan Roberts, Aarshi Jain, Quentin Geissmann, Elena Grace Sierra, Tim Zhou, Julia Chae, Benjamin Tremoulheac, Elizabeth Campolongo, Angela Zhu
👋:skin_tone_3: Jess Tam, Alan Stenhouse
Benjamin Hoffman (benjaminsshoffman@gmail.com)
2024-09-05 14:57:50

Is anyone aware of a paper that uses the precision & recall (or other metric) of a detection model measured on a test set, in order to inform how error bars are estimated in downstream analyses? I am assuming this would be some hierarchical Bayesian thing, along the lines of what’s done in https://zslpublications.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/rse2.171, equation (4), but modified to include what’s known about model error rates?

Ben Weinstein (benweinstein2010@gmail.com)
2024-09-05 15:02:14

*Thread Reply:* We play in this area in the attached. It's far from the kind of hierarchical model that we want, but it starts to gain some insight. This is the kind of thing we were discussing, @Toryn Schafer, @Ben Augustine: trying to couple non-calibrated scores with downstream confidence intervals. @Eric Orenstein has a paper on plankton that orbits this space as well. TL;DR, nothing that I'm aware of does precisely what you are looking for. Also summoning both @Casey Youngflesh and @Heather Lynch, since both are close colleagues and are in this channel, and you pointed at their paper.

👍 Casey Youngflesh
Benjamin Hoffman (benjaminsshoffman@gmail.com)
2024-09-05 15:08:47

*Thread Reply:* fantastic, thank you!

Toryn Schafer (tschafer@tamu.edu)
2024-09-05 15:13:12

*Thread Reply:* I've used multiple imputation in the past to propagate a first stage prediction uncertainty to a second stage inferential model: https://doi.org/10.1007/s13253-020-00399-y

Dan Morris (agentmorris@gmail.com)
2024-09-05 15:13:46

*Thread Reply:* This paper is not about a detection model, but otherwise it gets at what you're asking about (the relationship between ML accuracy and downstream metrics):

Whytock RC, Świeżewski J, Zwerts JA, Bara-Słupski T, Koumba Pambo AF, Rogala M, Bahaa-el-din L, Boekee K, Brittain S, Cardoso AW, Henschel P. Robust ecological analysis of camera trap data labelled by a machine learning model. Methods in Ecology and Evolution. 2021 Jun;12(6):1080-92.

Benjamin Hoffman (benjaminsshoffman@gmail.com)
2024-09-05 15:16:57

*Thread Reply:* thanks, these are all really great references!

Casey Youngflesh (caseyyoungflesh@gmail.com)
2024-09-05 16:18:32

*Thread Reply:* I don't have any paper suggestions but here's my (extended) take. I'd probably add another level to the traditional mark-recapture framework. You've got your observation model, your process model, and you could add the 'classification model'. I could see adding something to eq. 4, though I think the classification process is different from the observation process. Also, I think this matters for the creation of that vector z. Curious what folks think about the below, where t is the time step:

y = ML-classified observed state (given as data)
m = latent true observed state (estimated)
z = latent true occupancy state (estimated)
p = probability you observe, given occupancy (estimated)
psi = probability of occupancy (estimated)
h = probability that the classification is correct (given as data)
If y_t = 0, h_t = True positives / (True positives + False negatives)
If y_t = 1, h_t = True negatives / (True negatives + False positives)

y_t ~ Bern(m_t * h_t)
m_t ~ Bern(z_t * p)
z_t ~ Bern(psi)

The one issue here is that you need that vector of z's (latent true occupancy over the series), which you get from the entire time series of true observed states (filling 1s between first and last observation, assuming a period of closure). That's typically fed to the model beforehand (as data). And you don't know the entire time series of true observed states (m's) until your model has run through the time series -- typically your data are your true observed states. Maybe an approach similar to that taken by @Toryn Schafer above could work there for the z vector?
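As a rough illustration of the model sketch above, here is a minimal NumPy simulation of its generative side only (no inference). All parameter values are made up for the example, and h is simplified to a single constant rather than the y-dependent quantity described above:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100        # number of time steps
psi = 0.6      # occupancy probability (made up)
p = 0.4        # detection probability given occupancy (made up)
h = 0.9        # probability the ML classification is correct (simplified to a constant)

z = rng.binomial(1, psi, size=T)   # latent true occupancy state, z_t ~ Bern(psi)
m = rng.binomial(1, z * p)         # latent true observed state, m_t ~ Bern(z_t * p)
y = rng.binomial(1, m * h)         # ML-classified observed state, y_t ~ Bern(m_t * h)
```

By construction, a site can only be detected if occupied (m <= z), and a classification can only fire if there was a detection (y <= m); the inferential challenge discussed in the thread is going the other way, from y back to z.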

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-09-05 17:55:55

*Thread Reply:* We did something slightly tangential. We used a Bayesian characterisation of a detector, but we weren't so much interested in precision/recall as in the location accuracy. Assume you have a true positive from a multibox detector: most papers only look at a minimum Jaccard overlap to count it as a true positive (0.5, 0.95, ...), or maybe a weighted distribution to compare models, but when you want to use the localization downstream, you need the localization error distribution of the detector. We did that in https://doi.org/10.1109/LRA.2018.2850224 - the result is, to a close approximation, a normal distribution when measured over the evaluation set across different scale ranges:

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-09-05 17:56:30

*Thread Reply:* the false positives fall into the tail edge of the distribution

Eric Price (eric.price@ifr.uni-stuttgart.de)
2024-09-05 17:57:26

*Thread Reply:* that way we can use the projected position with a bayesian tracker and have a good estimate of the "sensor noise" when the DNN is the sensor
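As a hedged sketch of what characterising that "sensor noise" could look like: the matched ground-truth/predicted box centers below are synthetic stand-ins for true positives collected on an evaluation set, and the fitted (mu, sigma) are the per-axis normal parameters one would hand to a Bayesian tracker as the detector's noise model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical matched detections: ground-truth centers and detector-predicted
# centers in normalized image coordinates. In practice these come from matching
# true positives of the detector against annotations on an evaluation set.
gt_centers = rng.uniform(0.2, 0.8, size=(500, 2))
pred_centers = gt_centers + rng.normal(0.0, 0.01, size=(500, 2))

# Per-detection, per-axis localization error
errors = pred_centers - gt_centers

# Fit a normal distribution per axis; these (mu, sigma) then serve as the
# "sensor noise" of the DNN-as-sensor inside a Bayesian tracker.
mu = errors.mean(axis=0)
sigma = errors.std(axis=0, ddof=1)
```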

Matt Weldy (matthewjweldy@gmail.com)
2024-09-05 18:22:38

*Thread Reply:* @Casey Youngflesh that is roughly the likelihood of a multiscale occupancy model where the availability component is recast.

Different context, but @Tessa Rhinehart did this with bioacoustic predictions [https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.13905] where continuous scores of reviewed files were modeled as a normal 2-part mixture of the logits.

👍 Casey Youngflesh
Ben Weinstein (benweinstein2010@gmail.com)
2024-09-05 19:34:56

*Thread Reply:* I smell a review piece here if any graduate students are lurking. @Casey Youngflesh my problem is thinking about those latent states when you have more than just presence/absence. Ignoring for a moment that confidence scores are non-normal and only look like probabilities (already a fatal flaw), and that not all classes are confused equally (also fatal), the multinomial probability isn't just whether the classification is correct (h in your pseudocode), because the error also adds to another class. So for example in our tree work, if we want to count all the trees in a location and the classification model incorrectly predicts a tree to be an oak when it's a maple, that both affects the undercount of oaks, which is covered in your model, and the overcount of maples. You can easily imagine any MCMC process just endlessly trading off when trying to estimate that true latent state (m): no identifiability, especially as you get to more than 2 or 3 species.

👍 Casey Youngflesh
Matt Weldy (matthewjweldy@gmail.com)
2024-09-05 19:37:51

*Thread Reply:* Important to distinguish sigmoid or soft max type heads, where that tradeoff is different

👍 Ben Weinstein
Benjamin Hoffman (benjaminsshoffman@gmail.com)
2024-09-06 00:14:33

*Thread Reply:* thanks for all this, definitely a lot of good papers I’ll have to go read. This type of question has certainly haunted me before, and I would be interested in a review paper if one ever appeared @Ben Weinstein 🙂

Casey Youngflesh (caseyyoungflesh@gmail.com)
2024-09-06 15:44:22

*Thread Reply:* @Ben Weinstein Totally agreed about confidence vs. probability of correctness. But if you did have a proper probability of class in a vector [p_sp1, p_sp2, p_sp3], that should account for the fact that species aren't confused equally, right? Like, if the model knows they aren't confused equally, that should be baked into the probabilities (I see how it might not know, though). I also agree that numerous classes are bound to screw things up. But I'm not sure I follow why you'd necessarily expect a non-identifiability problem.

Maybe I don't understand the problem, but for trees in a given patch, you have a vector of probs for each tree. You draw a value for z (latent true species identity of a given tree i) from this vector of probabilities for tree i (let's just assume these are actual probabilities, one for each species possibility). Maybe you do this 10k times. So for each tree i, you have 10k realizations of true species ID. Then at each of the 10k iterations, you count the number of trees of each species. If you're just interested in counts, you have 10k realizations of counts for each species. This is just being backed out of the (assumed actual) probs, not with a Bayes model.

If you wanted to model the composition of trees as a function of stuff, you could use a Bayes model with a Dirichlet, yeah? With each of the proportions having an error model (you would need a sum-to-zero constraint though)?
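The 10k-draw idea above can be sketched in a few lines of NumPy. The per-tree probability vectors here are invented, and are assumed to be calibrated probabilities, which is exactly the big caveat raised earlier in this thread:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical classifier output: one probability vector per tree over 3 species,
# assumed (generously) to be actual calibrated class probabilities.
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.4, 0.4, 0.2],
    [0.9, 0.05, 0.05],
])
n_trees, n_species = probs.shape
n_draws = 10_000

# For each draw, sample a species ID for every tree...
ids = np.array([rng.choice(n_species, size=n_draws, p=pv) for pv in probs])  # (n_trees, n_draws)

# ...then tally per-species counts; each column is one realization of the count vector.
counts = np.stack([(ids == s).sum(axis=0) for s in range(n_species)])        # (n_species, n_draws)

# Quantiles across draws give uncertainty intervals on the species counts.
lo, hi = np.percentile(counts, [2.5, 97.5], axis=1)
```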

👍 Ben Weinstein
Brian Geuther (brian.geuther@jax.org)
2024-09-06 16:39:05

*Thread Reply:* Don't know if this is particularly helpful (most of this discussion is above my head), but the statistician in my group has used this in the past (conformal prediction for estimating confidence intervals): https://arxiv.org/pdf/2107.07511 At least in my understanding, if you get CIs on the model prediction (rather than probabilities), you should be able to propagate that uncertainty forward into the downstream analyses.

Also appears first author is in this slack channel.
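For anyone curious what the conformal idea looks like in practice, here is a minimal split-conformal sketch for classification along the lines of the tutorial linked above. The calibration data here are toy draws; in real usage `cal_probs` would be the softmax outputs of your actual model on a held-out calibration set:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.1  # target 90% marginal coverage

# Toy stand-in for model softmax outputs on a calibration set, plus true labels.
n_cal, n_classes = 1000, 4
cal_probs = rng.dirichlet(np.ones(n_classes) * 2, size=n_cal)
cal_labels = np.array([rng.choice(n_classes, p=pv) for pv in cal_probs])

# Nonconformity score: 1 minus the probability assigned to the true class.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile with the finite-sample correction.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

def prediction_set(probs):
    """Classes whose nonconformity score falls within the conformal threshold."""
    return np.where(1.0 - probs <= qhat)[0]
```

The resulting sets contain the true class with probability at least 1 - alpha (marginally), which is one way to get honest intervals to propagate downstream.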

Toryn Schafer (tschafer@tamu.edu)
2024-09-08 12:37:44

*Thread Reply:* Just catching up with the discussion! For the model sketch from @Casey Youngflesh, I tried having a friend go down that route for a multi-stage model with classification errors. The model software wasn't happy with the specification, so we ended up just defining each probability assuming independence of the classification and observation processes. We had:

P(y = s | latent = s) = P(detected and classified as s | latent = s) = P(detected | latent = s) * P(classified as s)

Toryn Schafer (tschafer@tamu.edu)
2024-09-08 12:38:47

*Thread Reply:* This is probably also prone to some of the challenges due to the simplifying assumptions pointed out above

Keiller Nogueira (keillernogueira@gmail.com)
2024-09-06 06:17:20

2nd Data-Centric Land Cover Classification Challenge

part of the Workshop on Machine Vision for Earth Observation and Environment Monitoring (MVEO) in conjunction with the British Machine Vision Conference (BMVC) 2024

Glasgow, Scotland, UK

https://mveo.github.io/challenge.html https://www.kaggle.com/competitions/data-centric-land-cover-classification-challenge/overview


Following the success of the previous challenge held in 2023, the Data-Centric Land Cover Classification Challenge is back. This time, participants will have to develop an AI-based ranking system that can rank the samples based on their levels of label noise.

To this end, a semantic segmentation dataset composed of 5,000 256x256 images and their corresponding (noisy) labels will be provided.

Success is measured by comparing the submitted ranking with an undisclosed one and calculating the Kendall Tau score.
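For anyone unfamiliar with the metric, a minimal pure-Python sketch of Kendall tau (tau-a, no ties) between two rankings; the rankings here are invented, and in practice you would likely just use a library implementation such as scipy.stats.kendalltau:

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall tau-a between two tie-free rankings:
    (concordant pairs - discordant pairs) / total pairs."""
    assert len(a) == len(b)
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(a) * (len(a) - 1) // 2
    return (concordant - discordant) / n_pairs
```

A score of 1 means the submitted ranking matches the reference exactly, -1 means it is exactly reversed, and 0 means no correlation.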


The final results of this challenge will be presented during the Workshop. The authors of the top-ranked methods will be invited to present their approaches at the Workshop in Glasgow/UK, on 28 November 2024. These authors will also be invited to co-author a journal paper which will summarize the outcome of this challenge.


IMPORTANT DATES

Challenge Deadline: Sunday, 10 November 2024
Workshop: Thursday, 28 November 2024


ORGANIZERS

Keiller Nogueira, University of Liverpool, UK
Ronny Hänsch, German Aerospace Center (DLR), Germany
June Moh Goo, University College London (UCL), UK
Zichao Zeng, University College London (UCL), UK
Pallavi Jain, Inria, France
Zhipeng Liu, University of Exeter, UK

👀 Valerie, Elizabeth Campolongo
Céline Angonin (C.Angonin@tilburguniversity.edu)
2024-09-09 11:01:45

[Bioacoustics dataset list] Hello!

During the last few months, I have been gathering publicly available bioacoustics datasets into one list. I noticed that such a resource didn't exist, so I thought it would be useful for the community. You can consult it here (https://bioacoustic-ai.github.io/bioacoustics-datasets/), and this is the link to the corresponding GitHub repository (https://github.com/bioacoustic-ai/bioacoustics-datasets). I am and will be continuously adding information and new datasets to this list.

You're very welcome to contribute! You can either add new datasets or open GitHub issues with a request or a suggestion. If you prefer, you can also DM me on Slack. 😁 Please consider starring the GitHub repo to help people discover this resource! ⭐

🙌 Carly Batist, Sara Beery, Burooj Ghani, Brian Geuther, Toryn Schafer, Ilyass Moummad, Rupa Kurinchi-Vendhan, Shir Bar, Subhransu Maji, Kishore Panaganti, Vincent Lostanlen, Ben Williams, charlotte, Aarshi Jain, Juan Sebastián Cañas, Sergei Nozdrenkov, Elizabeth Fawcett, Meredith Palmer
❤️ Inês Nolasco, Aakash Gupta, Vanesa Reyes, Sara Beery, Brian Geuther, Arthur Caillau, Maddie Cusimano, Laura Madrid, Chris Lange, Vincent Lostanlen, Lukas Picek, Aarshi Jain, Georgia Atkinson, Juan Sebastián Cañas, Carl Boettiger, Talia Speaker, Elizabeth Campolongo
😎 Jon Van Oast, Vincent Lostanlen, Aarshi Jain, Juan Sebastián Cañas
⭐ Maddie Cusimano, Vincent Lostanlen, Aarshi Jain, Juan Sebastián Cañas, Alexander Merdian-Tarko
🙌:skin_tone_3: Alan Stenhouse
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-09 11:04:16

*Thread Reply:* Awesome!!

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-09 11:04:23

*Thread Reply:* Had you seen this list before? It has a bunch more as well https://bioacousticsdatasets.weebly.com/

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-09 11:06:02

*Thread Reply:* It would be good to specify what you mean by ‘dataset’ too - species-specific calls, ML training datasets, annotated or un-annotated, etc.? Like are you wanting Xeno-Canto, Macaulay library, etc. there too?

Céline Angonin (C.Angonin@tilburguniversity.edu)
2024-09-09 11:12:25

*Thread Reply:* Yes, I did see it! I included their datasets in my list. 😊

> Like are you wanting Xeno-Canto, Macaulay library, etc. there too? Good point, I should make it clearer in the repo. I'm not considering databases or repositories like Xeno-Canto whose content changes over time, I am considering ML-ready datasets. Both labeled and unlabeled data are fine. I'll update the README tomorrow!

👍 Carly Batist
🙌 Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-09 11:14:59

*Thread Reply:* In that case, I would also recommend cross-checking your list with LILA BC too!

👍 Céline Angonin
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-09 11:15:01

*Thread Reply:* there are a number of ML-ready annotated datasets there as well - https://lila.science/otherdatasets#bioacoustics

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-09 11:16:09

*Thread Reply:* Might also be a good idea to post about this list on WILDLABS too!

👍 Céline Angonin
💯 Vanesa Reyes
Sara Beery (sbeery@caltech.edu)
2024-09-09 11:22:43

*Thread Reply:* Awesome!!!!

☺️ Vincent Lostanlen
Céline Angonin (C.Angonin@tilburguniversity.edu)
2024-09-09 11:33:47

*Thread Reply:* > In that case, I would also recommend cross-checking your list with LILA BC too! Yup, already done that! I used their list to add some datasets 😊

Holger Klinck (hk829@cornell.edu)
2024-09-09 11:35:21

*Thread Reply:* There are a bunch more datasets out there: https://zenodo.org/search?q=Klinck&f=resource_type%3Adataset&l=list&p=1&s=10&sort=bestmatch Scroll through the list...

Céline Angonin (C.Angonin@tilburguniversity.edu)
2024-09-09 11:37:45

*Thread Reply:* I will go through it, thanks!

Sam Lapp (sam.lapp@pitt.edu)
2024-09-10 16:41:04

*Thread Reply:* this is great, I’ve added it to the Bioacoustics resource page

here’s one more list by @Tessa Rhinehart to check https://docs.google.com/spreadsheets/d/1KrmCB0vvSK7V3znJfycO-eOMZJKP2F-Ih6neRYPz1Xc/edit?gid=0#gid=0

🙌 Carly Batist, Céline Angonin
Céline Angonin (C.Angonin@tilburguniversity.edu)
2024-09-11 03:00:36

*Thread Reply:* Thank you very much! I have already gone through Tessa's list when I curated the datasets list. I think I will mention in the README all the resources I have gone through to compile my list. 🤔

👍 Carly Batist, Elizabeth Campolongo
Ben Williams (ben.williams.20@ucl.ac.uk)
2024-09-11 06:57:53

*Thread Reply:* Just submitted a pull request for our coral reef dataset 🪸

Céline Angonin (C.Angonin@tilburguniversity.edu)
2024-09-11 07:04:05

*Thread Reply:* Thanks! 😍 By the way, if you found the procedure to add a dataset or anything else unclear, don't hesitate to tell me!

Lukas Picek (lukaspicek@gmail.com)
2024-09-11 08:01:01

*Thread Reply:* Great work! Just a small comment: it is hard to read if you have dark mode on in your browser.

Céline Angonin (C.Angonin@tilburguniversity.edu)
2024-09-11 08:03:52

*Thread Reply:* That's not hard, that's impossible 😂 I didn't think about that, thank you for raising the issue, I'll look at mitigating it!

Dan Morris (agentmorris@gmail.com)
2024-09-11 17:40:40

*Thread Reply:* Mmmm, lists of datasets are like my favorite thing (just behind baby animals, guitars, and football). I added a link to your page from the list of acoustic datasets on LILA's "other datasets" page (the same one linked to earlier on this thread):

https://lila.science/otherdatasets#bioacoustics

🙌 Carly Batist, Lukas Picek, Sara Beery
❤️ Céline Angonin, Tessa Rhinehart, Elizabeth Campolongo
Céline Angonin (C.Angonin@tilburguniversity.edu)
2024-09-12 03:58:45

*Thread Reply:* Thank you! 🤩

Hunter P (hunter@hunterpitelka.com)
2024-09-11 23:38:17

I'm hiring a Software Engineer!

Skylight is an AI-powered platform designed to combat illegal fishing by detecting vessels in satellite imagery and providing actionable insights to governments and organizations. As part of the team, you’ll work on cutting-edge technology that helps protect our oceans and supports global sustainability efforts. We're a team of 5 software engineers that partner with some of the leading AI researchers to apply AI to real world conservation problems and deliver actionable products to real users. Today Skylight is used in about 80 countries around the world! Check out the job posting here and feel free to DM me if you have any questions or referrals!

🐟 Sara Beery, Alexander Merdian-Tarko, Alan Stenhouse
👍 Tim Gardner, Angela Zhu
Atriya Sen (atriya@atriyasen.com)
2024-09-16 19:07:33

Hiring a fully-funded Graduate Research Assistant at Oklahoma State University

I am an Assistant Professor in the Computer Science department at Oklahoma State University, looking to hire a PhD or MS student Graduate Research Assistant (full tuition waiver and standard stipend) on an NSF-funded project aiming to apply AI techniques to conservation biology applications, with a focus on intelligently resolving taxonomic uncertainty.

Here is the award: https://www.nsf.gov/awardsearch/showAward?AWD_ID=2426835

I would love to hire someone who has a background in biology (and ideally an interest in systematics and/or conservation biology), but who is also interested in working with cutting-edge AI techniques and has some experience with programming and perhaps basic AI.

The prospective GRA should email me directly at atriya.sen@okstate.edu, with a CV and short statement of interest. The student would ideally start in Spring 2025, but Fall 2025 applicants will be considered.

😎 Jason Holmberg (Wild Me), Andy Viet Huynh
🎉 Hammed Akande
Atriya Sen (atriya@atriyasen.com)
2024-09-19 15:04:34

*Thread Reply:* I am also open to hiring a post-doc for this position in lieu of a graduate student.

🙌 Hammed Akande
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-09-17 05:46:32

Hi all, If a PhD student here with computer vision background is looking for a 3-month internship in a project related to CV and Robotics for Conservation, and is based in the EU (preferably) please DM me asap. I will be happy to recruit someone from here before posting the ad widely. For more info on the project in which the internship will be embedded, please check: https://www.aamirahmad.de/projects/wildcap/

🎉 Jon Van Oast, Sara Beery, Alex Rood, Nora Gourmelon, Marion Richardot
Dan Stowell (dan.stowell@naturalis.nl)
2024-09-18 06:45:18

Datasets for bioacoustics! So many of them: https://bioacoustic-ai.github.io/bioacoustics-datasets/ As part of the Bioacoustic AI project, @Céline Angonin has produced a new searchable listing of audio data sources for animal sounds. Check it out and find a new dataset (or contribute your own).

🐦 Nicolas Arrieta Larraza, Sara Beery, Shir Bar, Elizabeth Campolongo, Carl Boettiger
👏 Rita Pucci, Alex Rood, Talia Speaker, Enis Berk Çoban, Taku OP Iwaki
👍 Carly Batist
❤️ Jon Van Oast, Enis Berk Çoban, Morgan Ziegenhorn (she/they)
👏:skin_tone_5: Prabath Gunawardane
🎯 John Martinsson
Céline Angonin (C.Angonin@tilburguniversity.edu)
2024-09-18 06:53:35

*Thread Reply:* Thanks Dan, I've already advertised it above 😁 (https://aiforconservation.slack.com/archives/CLWGQ4BJ6/p1725894105104259)

🫢 Dan Stowell
Dan Stowell (dan.stowell@naturalis.nl)
2024-09-18 07:02:07

*Thread Reply:* Great! (and sorry for double-posting) Thanks everyone for your comments on Celine's post

Alex Rood (alex.rood@wildlabs.net)
2024-09-18 09:12:55

*Thread Reply:* Related event on bioacoustics data analysis and AI happening tomorrow! https://wildlabs.net/event/wildlabs-virtual-meetup-bioacoustics-data-analysis-and-ai

🙌 Nicolas Arrieta Larraza, Céline Angonin, Vanesa Reyes, Talia Speaker, Sara Beery, Yseult Hb, Anton Alvarez, Kit Lewers
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-20 18:22:32

Is anyone going to be at COP16 in Cali next month?! I would love to see some other AI for Conservation folks there 🙂

Justin Kay (justinkay92@gmail.com)
2024-09-20 18:26:21

*Thread Reply:* @Sara Beery @Neha Hulkund and I will be there for part of it 🙂

🙌 Carly Batist, Neha Hulkund
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-20 18:31:04

*Thread Reply:* Awesome! Which days will you be there? Are you doing any talks/panels?

We’re (WildMon) part of conservation tech event on the 24th - https://www.cbd.int/side-events/6023

Sara Beery (sbeery@caltech.edu)
2024-09-20 18:31:35

*Thread Reply:* 26-30th

👍 Carly Batist, Jon Van Oast
Sara Beery (sbeery@caltech.edu)
2024-09-20 18:32:52

*Thread Reply:* I won't make that one in person unfortunately, but will follow up with a few other events we're taking part in!

🙌 Carly Batist
Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-20 18:33:42

*Thread Reply:* That would be awesome thanks! Trying to get a list of nature/conservation-tech-focused events together 🙂

👍 Sara Beery
❤️ Jon Van Oast
😎 Jon Van Oast
Aamir Ahmad (aamir.ahmad@ifr.uni-stuttgart.de)
2024-09-23 10:41:05

*Thread Reply:* @Carly Batist is the event 6023 going to be broadcasted or can people join remotely?

Carly Batist (cbatist@gradcenter.cuny.edu)
2024-09-23 10:44:56

*Thread Reply:* I believe it’s in-person only unfortunately

👍 Aamir Ahmad
Jon Van Oast (jon@wildme.org)
2024-09-23 14:43:49

*Thread Reply:* interested in any list of conservation tech events compiled and/or people going to be there. i will not personally be there, but likely/hopefully some of my colleagues at CXL will be. gladly pass along info to them.

👍 Carly Batist
🌱 Kalindi Fonda
Sebastien Ouellet (sebouel@gmail.com)
2024-10-22 14:00:51

*Thread Reply:* This isn't related to AI (yet) but I'd be curious to see how solutions from other frameworks (like from the event on the 24th) could be standardized so this Pact could benefit: https://www.linkedin.com/posts/berlin-urban-nature-pact_berlinurbannaturepact-cop16-cities-activity-7254401158918586368-jEpm/

Jennifer Turliuk (jenn.turliuk@gmail.com)
2024-09-26 08:51:14

🌍 Ready to Tackle Climate Change? Join the MIT Energy & Climate Hackathon! 🚀

TLDR: Interested in Renewable Energy, Transportation, Future Mobility, Circularity, Climate Change, and tackling real-world problems - all while getting free swag, food, and the chance to win up to $1,000 per person? Apply by October 18, 2024, and participate from November 15-17, 2024, to help create a cleaner future.

Past sponsors include Google, McKinsey Sustainability, Crusoe, Schneider Electric, Fifth Wall, and more.

Here’s what we’re diving into: 🌿 Renewable Energy: Envision a world powered entirely by wind, solar, and green hydrogen—clean energy driving everything we do! 🚀 Future Mobility: Picture electric vehicles, high-speed trains, and even flying taxis making your commute fast, fun, and emissions-free! ♻️ Circularity: Let’s rethink how we use and reuse materials to minimize waste and build a sustainable future.

Join a team of up to 4, collaborate with global competitors, and solve real-world energy challenges proposed by our sponsors. Plus, network with top industry leaders and win up to $1,000 per person!

To participate, fill out our form: https://tinyurl.com/mitec-hack-2024 To learn more about the MITEC Hackathon, check out our website: https://www.mitenergyhack.org/

Please reach out if you have any questions! And email me at jturliuk@mit.edu if you’re interested in your company being a challenge sponsor/partner.

Google Docs
🌱 Kalindi Fonda, Jennifer, Alan Stenhouse
Jennifer (jzhuge@alumni.cmu.edu)
2024-10-09 16:28:28

*Thread Reply:* Hey! Are recent grads who are now working allowed to participate?

Aditya Jain (aditya.jain@mila.quebec)
2024-09-27 18:42:59

Is there any CV4E meetup happening at ECCV?

👀 Julia Chae, Robin Zbinden, Mia Chiquier, Malte Pedersen, Andrew Temple
Sara Beery (sbeery@caltech.edu)
2024-09-29 10:26:36

*Thread Reply:* Organize something!!

Devis Tuia (devis.tuia@epfl.ch)
2024-09-30 11:41:46

*Thread Reply:* tonight I am taken, but if you want to do smth tomorrow or wednesday, let me know

Sara Beery (sbeery@caltech.edu)
2024-09-30 11:44:13

*Thread Reply:* Maybe we could meet up at the reception tomorrow?

👍 Devis Tuia
Aditya Jain (aditya.jain@mila.quebec)
2024-09-30 11:55:30

*Thread Reply:* Sounds good!

Devis Tuia (devis.tuia@epfl.ch)
2024-09-30 11:57:08

*Thread Reply:* Where could we meet? It will be busy…

Devis Tuia (devis.tuia@epfl.ch)
2024-10-01 11:47:44

*Thread Reply:* What about 1830 at the Google booth?

❤️ Sara Beery
Katja Ovchinnikova (e.ovchinnikova@gmail.com)
2024-09-30 14:52:50

Hi everyone! I’m Katja, a data scientist with extensive experience in academia, specializing in AI/ML. Witnessing the urgency of the climate crisis, I decided to transition to the climate solutions industry and completed climate-focused courses with Terra.do and Airminers. As part of this transition, I’ve been working as a data science consultant as well as a hands-on freelancer for early-stage climate and environmental startups. This experience has been truly rewarding and has only deepened my desire to work on climate solutions.

I’m especially passionate about biodiversity, particularly regarding the ocean, but also beyond. My past and current projects include collaborations with fisheries and conservation organizations on tasks such as automatic scallop recognition, manta ray and whale identification, coral species recognition, and identifying species in sonar data.

Here’s how I can contribute as a data scientist:
• Consult on developing strategies for data-driven products
• Advise on data analysis workflows and infrastructure
• Prototype and evaluate AI/ML models
• Design experiments for data collection and analysis
• Write scientific publications and blog posts
• Lead workshops on data science in the climate space, as well as broader climate science and solutions topics
If you’re looking for a data science consultant, freelancer, or contractor for your project, I’d love to chat and explore how we can work together! I’m working remotely, self-employed in France. For more details, you can check out my website: www.ovchinnikova.me.

👋 Dan Morris, Sara Beery, Timm Haucke, Nicholas, Howard L Frederick, Alexander Merdian-Tarko, Lingchao Mao
🌱 Kalindi Fonda
👋:skin_tone_3: Alan Stenhouse
Alex Filazzola (alex.filazzola@outlook.com)
2024-10-02 12:14:12

🦋 Hi All! 👋

We are hosting a workshop on building reproducible workflows for species distribution models (SDMs) using SyncroSim. SyncroSim is free software for non-commercial use (e.g., academics, non-profits, government). It provides an easy-to-use interface that works with open-source packages, whether through a graphical user interface, the command line, R, or Python. SyncroSim simplifies the creation of workflows for SDMs, including models like MaxEnt and random forest. In this workshop, participants will learn the basics of running species distribution models and how to create reproducible workflows for scientific modeling, from start to finish. This workshop is part of the annual Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE) conference. Attendance is free for members of SORTEE, and membership costs $10-$40 depending on career stage. There are also free waivers for members of other open science organizations.

📅 Date: Tuesday, October 15, 2024 📍 Location: Online 🕰️ Time: 12:30 pm PT; 3:30 pm ET ✅ Sign up here: https://events.humanitix.com/sortee-conference-2024

SyncroSim | Deliver geospatial forecasting models direct to decision makers
sortee.org
events.humanitix.com
Location
Online
Date
Tuesday October 15th 2024
🤩 Sara Beery, Aakash Gupta, Robin Zbinden, Alan Stenhouse
Luke Meyers (lmeyers@SEATTLEU.EDU)
2024-10-02 20:02:17

Anyone work with (or know about) high performance computing facilities within the Smithsonian? If so would love to get in touch!

Dan Morris (agentmorris@gmail.com)
2024-10-06 10:15:48

New dataset on LILA, containing ~119k images from downward-facing small animal cameras (see the adorable chipmunk at the end of this post):

https://lila.science/datasets/ohio-small-animals/

Thanks to Greg Lipps and Sowbaranika Balasubramaniam from Ohio State for this dataset.

Hopefully this is the first of several datasets from downward-facing cameras, which are not covered in existing public datasets and are not handled well by existing models, including MD.

🎉 Timm Haucke, Sara Beery, Aude Vuilli, Justin Kay, Atul Ingle, Murilo Gustineli, Sankalpa Ghose, Shir Bar, Shawn Johnson, Risa Shinoda, Ștefan Istrate, Marion Richardot, Malte Pedersen, Viktor Domazetoski, Jenna Kline, Jon Van Oast, Roberta Hunt, Tiziana Gelmi Candusso, Sowbaranika, Eric Cunningham, Jennifer, Alan Stenhouse
😲 Mitchell Rogers, Marion Richardot, Arky
🐿️ Chris Lange, Jess Tam, Toryn Schafer, Alexander Merdian-Tarko, Elizabeth Campolongo
🐭 Jenna Kline
Tom August (tomaug@ceh.ac.uk)
2024-10-08 04:27:24

I'm looking for people attending COP16 in Cali, Colombia who would be interested in joining a panel on "What is the role of AI in addressing the biodiversity crisis?". The panel will be hosted by the British embassy and will run for one hour on 22 October, sometime between 14:00 and 18:00. If you think you could contribute please drop me a message

🙌 Anton Alvarez, Justin Kay, Carly Batist, Jon Van Oast
👍 Carly Batist, Jon Van Oast
🙌:skin_tone_3: Alan Stenhouse
Drea Burbank (drea@savimbo.com)
2024-10-08 16:15:52

Hey guys, we need some data scientists to fix the 80% stat. https://news.mongabay.com/2024/09/do-indigenous-peoples-really-conserve-80-of-the-worlds-biodiversity. We’re getting together a scientific workgroup for this that will meet weekly for three months following COP and try to crank out a paper as a coalition the way they did. Sign up here. Meetings run .

Mongabay Environmental News
Written by
Latoya Abulu
Est. reading time
14 minutes
Airtable
🙌 Carly Batist, Alessandra Vidal Meza, Sara Beery, Murilo Gustineli, Thijs van der Plas, Devi Ayyagari
❤️ Jon Van Oast
🙌:skin_tone_5: Prabath Gunawardane
Jon Van Oast (jon@wildme.org)
2024-10-08 17:23:29

*Thread Reply:* is there a shareable/public version of this scientific workgroup call? i would like to share it with folks not on slack. thanks!

Pietro Perona (perona@caltech.edu)
2024-10-10 02:19:04

Has anyone read the 2024 WWF report "Living Planet"? Do people understand why the uncertainty in their measurements is increasing with time instead of going down? What's going wrong?

Aakash Gupta (aakash@thinkevolveconsulting.com)
2024-10-10 04:41:24

*Thread Reply:* When making predictions about future trends, especially in ecology, the confidence in those projections diminishes because of the unpredictability of various environmental factors (climate change, human impacts, conservation efforts). Thus, future uncertainty grows as a function of both complexity and incomplete data.

🙏 Pietro Perona
Pietro Perona (perona@caltech.edu)
2024-10-10 05:18:05

*Thread Reply:* Thank you @Aakash Gupta. The top plot does not extrapolate though. The y axis is on a log scale and thus, if the confidence interval is constant in time, the confidence interval will appear to become larger as the index declines. Still, I am surprised that the confidence interval does not shrink in time given all the ongoing work to obtain accurate measurements.

👀 Clemens Mosig
Holger Klinck (hk829@cornell.edu)
2024-10-10 06:35:23

*Thread Reply:* This may also provide some insights: https://communities.springernature.com/posts/the-living-planet-index-is-not-a-reliable-measure-of-population-changes

Research Communities by Springer Nature
👀 Steve Haddock
🙏 Pietro Perona
Teodoro Topa (teodorotopa@gmail.com)
2024-10-10 09:38:39

*Thread Reply:* @Pietro Perona I think it’s because it is indexed to 1970 levels. The error bar size seems like a function of time rather than of the quality of biodiversity estimates. Since nobody has exact numbers on how many animals there were in 1970 or how many there are today, as we get further from 1970 and more time passes, we can be less and less sure of exactly how far we have fallen from the starting point. Basically, in 1971, very little time had passed, so regardless of the quality of biodiversity estimates, the confidence range was small because there just was not enough time between 1970 and 1971 for estimations to drift far from what was likely the reality.

🙏 Pietro Perona
👍 Omiros Pantazis
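[Editorial aside] Teodoro's point can be checked with a toy Monte Carlo simulation (purely illustrative, not the Living Planet Index methodology): if each year's estimated log growth rate carries an independent error, the cumulative log-index anchored to 1970 is a random walk, so its confidence band widens like √t even when per-year data quality never changes. The numbers below (sigma, years, runs) are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
years = 54      # 1970-2024
n_runs = 2000   # Monte Carlo replicates
sigma = 0.03    # assumed per-year s.d. of the estimated log growth rate

# Each year's growth-rate estimate has an independent error, so the
# cumulative log-index is a random walk: variance grows linearly in time.
errors = rng.normal(0.0, sigma, size=(n_runs, years))
log_index = np.cumsum(errors, axis=1)

# Width of the 95% band at each year, taken across replicates.
lo, hi = np.percentile(log_index, [2.5, 97.5], axis=0)
ci_width = hi - lo
print(ci_width[0], ci_width[-1])  # the band at year 54 is ~sqrt(54) ≈ 7x wider
```

On a log-scale plot this widening band appears even though no individual year's measurement got any worse, which matches Teodoro's intuition about distance from the 1970 baseline.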
Heather Lynch (heather.lynch@stonybrook.edu)
2024-10-31 11:22:27

*Thread Reply:* We address some of these issues in our recent-ish paper:

Ben Williams (ben.williams.20@ucl.ac.uk)
2024-10-11 11:55:01

Hi all, we're submitting a funding bid and want to put down a sensible ballpark figure for some software engineering work. Does anyone have an idea, or know a good place to look, as to how much it would cost to package up an existing ML model and processing pipeline into a nice user friendly interface with a few small bells and whistles?

In more detail: we would be passing through audio data, running an ensemble classifier on it, then having humans review the outputs (usually just to select the true positives), then applying these labels to all the audio data. Some summary outputs (e.g. number labelled, number of matches, etc.) would be nice. We would also likely want to be able to update the model checkpoint with new versions every now and then (trained outside the software) to handle model drift. It would be for in-house use or use by close collaborators so it doesn't need to look fancy, but it would need to be able to run locally on Windows machines.

Nate Harada (nharada1@gmail.com)
2024-10-11 12:23:54

*Thread Reply:* Hey Ben! Here are a couple of questions I'd probably ask when estimating, to help nail it down:

• How complex is the ensemble? Do you need GPUs on the Windows machines? What kind of software has to be on the machines to make it work (i.e. CUDA, etc.)?
• Is the existing model easily compatible with Windows? What format is the model in?
• What format and language is the supporting code for the model? How much of it is there?
• How technical are the people using the software? Does the update process need to be pretty foolproof, or can people download and modify the software themselves?

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2024-10-11 17:51:56

*Thread Reply:* I don't have a $$$ number, because this probably depends on a lot of factors like where you're located and what you're willing to pay (e.g. are you proposing that a postdoc/grad student does this, or would you contract it out?) Generally when I've submitted academic grants, we've based the amount on length of appointment x salary for position, so a year of postdoc time in the UK might be ~80-100k GBP when you include overheads and other lining of the university's pockets.

Conservatively I'd say it's at least a few months' work. For someone with experience in all the components, it's probably possible to get an MVP out in a matter of weeks. For example if you didn't need a standalone executable, you could use a ready-to-go UI like streamlit/gradio and run it straight from Python/Docker. What will take longer is things like end-user/technical documentation, getting installers working reliably (does it have to be a standalone thing?), etc.

You could find a consultant who could bang this out in much less time, but you'll probably pay for that expertise (think several hundred a day at least).

Realistically, you probably want to have someone around long enough to maintain and update things for at least 6 months while you test it.

🙏 Ben Williams
Ben Williams (ben.williams.20@ucl.ac.uk)
2024-10-14 04:56:13

*Thread Reply:* Thanks both! Creation of the model is part of the grant so I can only give guesses as answers, but here's my best intuition currently:
• We are aiming to make it lightweight so it can run with no GPU and some patience
• It will likely be written in standard libraries like TF or PyTorch. We'll be designing it to be compatible with Windows from the off, as this is what our users in the field have.
• Assume not technical at all! In my limited experience of this, I'd imagine something like: we upload new checkpoint file(s) somewhere and they just replace the old ones.
Let me know if that helps narrow down a range @Nate Harada 🙏

Super useful on the postdoc salary point, fully agree getting something stable is going to be where a lot of the hard yards are

Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2024-10-14 14:19:51

*Thread Reply:* I think that's sensible. Many models can be run fine on CPU if latency isn't an issue. I would suggest decoupling your training library from the "production" model. You can distribute your model in a standard open format like ONNX (e.g. onnx-runtime). You could also consider writing a native application, for example using C++/Qt. That tends to be a lot cleaner than trying to package up a mess of Python dependencies and it will likely run faster. The downside is that finding developers with that kind of experience (in academia), tends to be harder.

If you're also expecting the hire to train the model as well, then budget for at least a year (do you have datasets, etc ready to go?)

Universities tend to have baseline costs for hiring and they're often (much) higher than you expect - e.g. a postdoc in London might get paid 40k, but the University will charge double or more. We've been burned by this in the past (e.g. the HR overhead was so high that we had to scale back the project), so I would highly recommend speaking to someone in HR/grants and getting estimates for staff at different levels of seniority.

Ben Williams (ben.williams.20@ucl.ac.uk)
2024-10-18 12:51:47

*Thread Reply:* Thanks @Josh Veitch-Michaelis! Will certainly be separating the training and deployment aspects; the model will already be developed and we would just be asking the hire to package it up into some kind of desktop app type thing. Good to know on ONNX and C++ etc, that's super helpful. Yeah my PI is experienced in all the HR stuff for grants so we're aware we basically need to double the salary cost for the overheads, we just needed a ballpark figure on this. So we're now budgeting for a 6-month postdoc and can use these funds to look either for one or for an external contractor. Thanks so much for all the tips

👍 Josh Veitch-Michaelis
Josh Veitch-Michaelis (j.veitchmichaelis@gmail.com)
2024-10-18 13:46:10

*Thread Reply:* Great, best of luck. Feel free to get in touch if you have any other questions!

Jon Van Oast (jon@wildme.org)
2024-10-11 17:50:07
Brittany Aguilar (baguilar@schmidtsciences.org)
2024-10-12 16:19:06

Hi all, a friend of mine works for WWF and shared an upcoming AI x Conservation virtual event/conversation they are hosting with some of their experts that you may be interested in attending. I think you just need to email Christina for an invite...

Your passion for technology and conservation is inspiring, and because of that I’m excited to invite you to a special virtual presentation showcasing how WWF is deploying AI across our conservation efforts, led by WWF experts Abby Hehmeyer and Lu Gao. We’re gathering a few like-minds to dive into the specifics of two AI-driven projects that are revolutionizing wildlife monitoring and combating wildlife trafficking, all to protect endangered species and preserve biodiversity.

Your expertise would be particularly useful to our team, especially as we discuss WWF’s broader technology vision.

Date: Tuesday, October 15, 2024 Time: 11:00 AM PST / 1:00 PM CST / 2:00 PM EST Location: ZOOM (RSVP for details)

Please RSVP for more details by simply responding to this email or by calling/texting me at . Abby, Lu, myself, and others are hoping to see you on the call and share our collective enthusiasm for the use of technology in conservation!

Contact: Christina Armstrong, CFRE christina.armstrong@wwfus.org

Sara Beery (sbeery@caltech.edu)
2024-10-14 11:44:46

Flagging this in case people hadn't seen it!

https://www.planet.com/pulse/planets-project-centinela-monitoring-vulnerable-biodiversity-hotspots-for-conservation-action/

planet.com
🙌 Anna Ding
Jes Lefcourt (jeslefcourt@gmail.com)
2024-10-14 18:03:35

*Thread Reply:* Sara, are you connected to the team at Planet working on this?

Sara Beery (sbeery@caltech.edu)
2024-10-14 18:13:51

*Thread Reply:* Not directly on this project! But I flagged because it seemed like it might provide opportunities for biodiversity projects to access data.

Jes Lefcourt (jeslefcourt@gmail.com)
2024-10-14 18:42:28

*Thread Reply:* Absolutely! It's great!

Riley Knoedler (mknoedler@west-inc.com)
2024-10-16 15:55:00

I posted this on WILDLABs but I'm cross-posting here: it occurred to me and my colleagues recently that the self-driving vehicle industry must be working on their own animal classification models, because it matters if you are about to collide with a turkey or a moose or a pet dog. Is anyone in this community involved in that sphere? Does anyone know of opportunities to access these data or collaborate on model development?

🤔 Elizabeth Campolongo
Elizabeth Campolongo (e.campolongo479@gmail.com)
2024-10-16 16:04:15

*Thread Reply:* That's a good point. Though, I'd expect them to be more focused on size considerations. Curious if anyone has any insight and would be interested in the data too!

Riley Knoedler (mknoedler@west-inc.com)
2024-10-16 16:26:04

*Thread Reply:* Well, I wonder if there's any appetite for being able to distinguish between sensitive and non-sensitive species as well, at least from a PR perspective

Elizabeth Campolongo (e.campolongo479@gmail.com)
2024-10-16 16:45:46

*Thread Reply:* I would certainly hope so 😭 or maybe they don’t want to hit pets or a skunk 🤷‍♀️

Levi Cai (lcai@whoi.edu)
2024-10-16 17:27:46

*Thread Reply:* Seems vaguely like (a) yes they do make their own detectors/classifiers for animals but (b) probably extraordinarily painful to collaborate/get access to anything of the kind for an external user of any kind.

Wenxin Yang (wenxinyang@ucsb.edu)
2024-10-16 18:23:36

*Thread Reply:* I’m not working in this direction but remember Volvo was trying to develop a system to detect animals and the system was confused by kangaroos (news article).

the Guardian
michele volpi (michele.volpi@sdsc.ethz.ch)
2024-10-17 10:43:33

*Thread Reply:* I have a friend who worked in the industry, for lidar data they mostly consider large animals (which crash the car), cats and dogs (potential lawsuits) and birds (flying in front of sensors could lead to problems). Some animal classes were considered but just for evaluation, not for training. Accessing data depends on the company but is often a no-go

👍 Chris Lange, Elizabeth Campolongo
Riley Knoedler (mknoedler@west-inc.com)
2024-10-17 13:11:04

*Thread Reply:* Thanks for all the replies, very helpful insights!

Tom August (tomaug@ceh.ac.uk)
2024-10-17 11:33:44

For anyone attending COP16 you can now register for our event, which includes a panel "What is the Role of AI in Addressing the Biodiversity Crisis?", and free drinks 🍸

Eventbrite
Where
4-84 Avenida 4 Oeste, Cali, Valle del Cauca 760045, Colombia
When
Tue 22 Oct 2024 at 14:00
👏 Morgan Ziegenhorn (she/they)
🙌 Thijs van der Plas, Sara Beery, Arky, Ted Schmitt, Carly Batist
♥️ Morgan Langley
Tom August (tomaug@ceh.ac.uk)
2024-10-17 11:35:04

I should add that while the whole event is tagged as "The UK's Role", the panel will be globally focussed!

Arky (hitmanarky@gmail.com)
2024-10-18 01:34:59

Cerulean platform for monitoring oil spills is now in public beta.

https://cerulean.skytruth.org/

🙌 Jonathan Roberts, Sonny Burniston, Anton Alvarez
Filippo Varini (fppvrn@gmail.com)
2024-10-19 09:00:47

Hello everyone, has anyone tried out META’s SAM2 for ecological application? I am exploring its potential for segmenting and tracking objects in underwater videos and would love to discuss pros and cons of the approach!

Specifically, I find the following challenges:

  1. Can SAM2 track videos automatically, without prompts, once fine-tuned? a. It seems like you can automatically generate masks, but it is not clear whether you can automatically track as well.
  2. Can SAM2 detect and track new objects appearing in the video?
  3. Does it perform well with many objects (e.g. a school of fish)?
Would love to discuss the potential of this new tech with the community 🙂
👍 Sonny Burniston, Mitchell Rogers, Nathan Fox, Roberta Hunt
👍:skin_tone_3: Alan Stenhouse
Filippo Varini (fppvrn@gmail.com)
2024-10-19 09:06:52

*Thread Reply:* Would love to share code and insights. So far I have been playing with the following notebooks:
• video predictor
• automatic mask generator
• ultralytics implementation

Brian Geuther (brian.geuther@jax.org)
2024-10-21 15:14:03

*Thread Reply:* I haven't tried SAM2 yet, but I did try out TAM, which was based on SAM1 (https://arxiv.org/abs/2304.11968). A couple of notes from testing out TAM:
• We could only fit ~600 frames at once (a restriction that may also exist with SAM2)
• SAM supervision works fairly well when things that look identical are separated, but fails by merging them together when they're touching
• IR/monochrome imaging isn't well represented in their foundational training data, leading to basic issues like shadows influencing borders
• Despite those drawbacks, we could pretty reliably get okay tracking with 1 click per object for about 10-15s (of the 20s clip)

Filippo Varini (fppvrn@gmail.com)
2024-10-22 00:57:51

*Thread Reply:* Thanks Brian, that sounds interesting!

Georgia Atkinson (g.atkinson@ncl.ac.uk)
2024-10-22 05:46:34

*Thread Reply:* Hi Filippo, I'm also looking into applying SAM2 for underwater applications! I've been trying to get around having to do point/bounding box prompts so I've been having a look at Grounded SAM2 to do text prompting instead which seems to be giving ok results thus far (I only started playing around with it last week). I'm a bit unclear on whether after some fine tuning I'd be able to automatically generate masks for the specific classes I'm interested in or not however.

Filippo Varini (fppvrn@gmail.com)
2024-10-22 08:19:35

*Thread Reply:* Thanks Georgia! I will have a look at Grounding Dino and let’s keep in contact about updates!

Aakash Gupta (aakash@thinkevolveconsulting.com)
2024-10-29 00:09:26

New Dataset: A comprehensive annotated image dataset for real-time fish detection in pond settings

The dataset contains 10,607 annotated fish instances across 586 images. The dataset features several computer vision challenges, including occlusion, turbid water conditions with turbidity levels ranging from 40 to 80 NTU, high fish density per frame, and varying lighting conditions with illumination values between 100 and 500 lux. These factors significantly impact image quality and detection performance, making the dataset ideal for developing robust computer vision models capable of handling complex real-world scenarios.

https://www.sciencedirect.com/science/article/pii/S2352340924009697

🙌 Elizabeth Campolongo, Jason Holmberg (Wild Me), Braden Charles DeMattei, Morgan Ziegenhorn (she/they), Carly Batist, Murilo Gustineli
🐟 Jason Holmberg (Wild Me), Dan Morris, Matthias Zuerl, Leonie Hodel, Shir Bar, Murilo Gustineli
🐠 Jason Holmberg (Wild Me), Urs, Murilo Gustineli
Dan Stowell (dan.stowell@naturalis.nl)
2024-10-30 08:46:49

PhD to recruit! A collaboration between Naturalis and Tilburg University on fossil images, in a project called "HAICu" - Please circulate this short link to anyone who might be interested https://tiu.nu/22517

👍 Oisin Mac Aodha, Sara Beery, Murilo Gustineli, Shir Bar, Jason Holmberg (Wild Me)
Elke Windschitl (Elke.Windschitl@islandconservation.org)
2024-10-30 16:53:39

Hi all, I am working on a project with Island Conservation where we have georeferenced drone imagery of burrow holes. There are usually many burrow holes per image (dozens). Previously these holes have been counted by hand with ImageJ, but I am exploring alternative solutions with machine learning. I would love to connect with someone who might have experience with this or a similar application. Please comment or message me if you would be willing to discuss. Thanks so much!

❤️ Sara Beery, David Russell, Timm Haucke, Andy Viet Huynh, Seven
Ben Weinstein (benweinstein2010@gmail.com)
2024-10-30 16:54:40

*Thread Reply:* can you drop in a screenshot?

Elke Windschitl (Elke.Windschitl@islandconservation.org)
2024-10-30 16:57:32

*Thread Reply:* Here’s an original image plus zoomed in on one section of burrows

Ben Weinstein (benweinstein2010@gmail.com)
2024-10-30 16:58:11

*Thread Reply:* and you want to locate them, like with a box or point on the entry.

Ben Weinstein (benweinstein2010@gmail.com)
2024-10-30 17:00:48

*Thread Reply:* okay, so depending on how exactly ImageJ was run, you may be able to recover the position of previous annotations over the image. If not, in about an hour on http://labelme.csail.mit.edu/Release3.0/ you could probably label 25 images, and run them through an object detection tool like ours: https://deepforest.readthedocs.io/en/latest/user_guide/11_training.html. You can open a discussion on the GitHub repo wherever you get stuck.

David Russell (davidrussell327@gmail.com)
2024-10-30 17:04:14

*Thread Reply:* Once you have the manual annotations and/or ML predictions, do you have to worry about double counting burrows across multiple overlapping images? If so, some of the work we've done such as this tool might become useful. You could use this as an additional post-processing step after following the steps Ben described to generate ML predictions on each image.

Website
<https://open-forest-observatory.github.io/geograypher/>
Stars
11
❤️ Timm Haucke
Elke Windschitl (Elke.Windschitl@islandconservation.org)
2024-10-31 11:27:33

*Thread Reply:* Thank you so much I will check out these tools! Correct in that we don’t want to double count holes. We were thinking of breaking apart the stitched together orthomosaic.

👍 David Russell
David Russell (davidrussell327@gmail.com)
2024-10-31 11:33:30

*Thread Reply:* Cool, this workflow also relies on running structure from motion as a first step, you just generate predictions on the raw images (rather than ortho) and then try to suppress duplicate detections geometrically. The motivation is you may have higher quality detections on the raw images than the orthos, since they don't have stitching artifacts and you get multiple perspectives.
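[Editorial aside] A bare-bones sketch of the geometric duplicate suppression David describes (illustrative only, not geograypher's actual algorithm). It assumes each detection center has already been projected into world coordinates via structure from motion; the merge radius and the greedy keep-first strategy are arbitrary choices for the example.

```python
import numpy as np

def suppress_duplicates(points, image_ids, radius=0.5):
    """Greedy merge: keep a detection only if no already-kept detection
    from a *different* image lies within `radius` (same units as points).
    Detections from the same image are never merged, since they are
    distinct objects within one frame."""
    kept_pts, kept_imgs = [], []
    for p, img in zip(np.asarray(points, dtype=float), image_ids):
        dup = any(i != img and np.linalg.norm(p - q) < radius
                  for q, i in zip(kept_pts, kept_imgs))
        if not dup:
            kept_pts.append(p)
            kept_imgs.append(img)
    return np.array(kept_pts)

# Three sightings of the same burrow seen from two overlapping images,
# plus one distinct burrow: 4 detections -> 2 unique burrows.
pts = [(10.0, 5.0), (10.1, 5.05), (10.05, 4.95), (20.0, 8.0)]
imgs = ["img_a", "img_b", "img_b", "img_a"]
unique = suppress_duplicates(pts, imgs)
print(len(unique))  # 2
```

A real pipeline would likely also weigh detection confidence when choosing which duplicate to keep, rather than keeping the first one seen.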

Elke Windschitl (Elke.Windschitl@islandconservation.org)
2024-10-31 12:00:11

*Thread Reply:* Oh that is very cool

Riley Knoedler (mknoedler@west-inc.com)
2024-11-01 12:54:18

*Thread Reply:* Hi Elke, I've been working on a similar project to detect prairie dog burrows from drone imagery. We trained a model in YOLOv8, I think we are using a similar approach to what David has referenced to merge detections together from different overlapping images. Happy to discuss more with you!

❤️ Elke Windschitl
Lasha Otarashvili (otarashvililasha@gmail.com)
2024-10-31 11:17:32

New model is out for wild animal re-id, covering 64 species from mixed data sources.

This is Wild Me's strongest generation of re-id model in use on Wildbook platforms, covering cetaceans, mammals, carnivores, and more. The model shows state-of-the-art generalization ability for search queries on Wildbook platforms. For this release, the model is stripped down and made easy to run and experiment with.

• Model weights and list of species included
• Ready-to-use scripts

Hopefully this helps both those who have worked with MiewID previously and those who haven't. The knowledge transfer lowers the barrier and opens up the possibility of adapting deep learning models to datasets previously too small to train on.

huggingface.co
😎 Jason Holmberg (Wild Me), Timm Haucke, Jon Van Oast, Matthias Zuerl, Dante Wasmuht, Sara Beery
🎉 Jason Holmberg (Wild Me), Lukas Picek, Vojta Čermák, Justin Kay, Timm Haucke, Rohan Sawahn, Meredith Palmer, Avi Sundaresan, Nicolas Arrieta Larraza, Dante Wasmuht, Christoph Praschl, Carly Batist, Jennifer, Alexander Merdian-Tarko, Anton Alvarez
👍 Kostas Papafitsoros, Timm Haucke, Dante Wasmuht, Piotr Tynecki, Seven
❤️ Sara Beery
Lasha Otarashvili (otarashvililasha@gmail.com)
2024-10-31 11:27:13

*Thread Reply:*

Lasha Otarashvili (otarashvililasha@gmail.com)
2024-10-31 11:27:46

*Thread Reply:* From-scratch vs leveraging model weights to adapt to new datasets.

Lasha Otarashvili (otarashvililasha@gmail.com)
2024-10-31 11:28:50

*Thread Reply:* A paper with detailed zero-shot and fine-tuning studies will be available next week. The model also transfers well to 'similar' species in a zero-shot manner; for example, the dataset includes lots of dorsal fin samples, so the model just generally does well on them, regardless of species. Moreover, the 64-species model can easily be fine-tuned for new domains to make use of knowledge transfer for a general re-id task. The positive effect varies on a per-species basis from limited to dramatic. The knowledge transfer could be especially handy for those working with just small amounts of labelled data.

Matthias Zuerl (matthias.zuerl@fau.de)
2024-11-04 17:28:10

*Thread Reply:* Awesome work! Can you please also post the mentioned publication here as well? I am very interested in reading it!

Ilyass Moummad (mr.ilyassmoummad@gmail.com)
2024-11-04 17:53:30

Check out our robust and efficient feature extractor (PyTorch) for bird sounds, with only ~20M parameters, trained on the Xeno-Canto dataset covering over 10,000 bird species, ideal for bioacoustic projects. Explore it here: https://huggingface.co/ilyassmoummad/ProtoCLR

huggingface.co
👍 Masato Hagiwara, Meredith Palmer, Jose Ruiz-Munoz, Murilo Gustineli, Thor Veen, Juan Sebastián Cañas, Burooj Ghani, Don Cosseboom
😎 Jason Holmberg (Wild Me), Don Cosseboom, Clément Sage
🐦 Shir Bar, Juan Sebastián Cañas, Don Cosseboom, Nicolas Arrieta Larraza, Aude Vuilli
❤️ Sara Beery
Ilyass Moummad (mr.ilyassmoummad@gmail.com)
2024-11-05 04:23:22

*Thread Reply:* Classification accuracy of ProtoCLR compared to various models on one-shot and five-shot bird sound classification tasks

Aria Ma (aria@luneaera.com)
2024-11-05 13:51:57

Hello all, my team is putting together focus groups for a conservation community platform to empower a network of knowledge sharing. Please let me know if you're working in a conservation nonprofit and would like to join! Happy to send more information via DMs (-:

Brittany Aguilar (baguilar@schmidtsciences.org)
2024-11-05 14:29:32

*Thread Reply:* am I wrong in thinking that's what this slack channel is for? or https://wildlabs.net/ ? I am a funder who is involved with various conservation efforts and I try to support pre-existing communities rather than create new ones... Could you give a little more information on your goals for this group?

Aria Ma (aria@luneaera.com)
2024-11-05 15:12:27

*Thread Reply:* I totally understand where you're coming from! This is a group for all facets of conservation, not just AI or technology: it includes natural solutions from ocean conservation to mangrove restoration. The goal is to connect people across all of these areas.

Eugene Galaxy (jga@reallyarobot.com)
2024-11-06 04:39:29

Hey everyone! 👋

I am Eugene, and together with my two teammates Hugo & Elina I am building a software tool called Animal Detect to process (filter, label, sort, classify, output) wildlife data as quickly as possible, without necessarily relying on existing AI classifiers. The idea is to make it simple and accessible to people working with wildlife from all technical backgrounds, and to do it FAST.

A little about me... I have an MSc in Robotics Engineering with a focus on computer vision and AI from Aalborg University, Denmark. I find rest and peace being alone in nature. I love animals, got my PADI diver certification, and did climbing for a while. Not surprisingly, I found myself at the intersection of wildlife and technology. 😃

We are looking for feedback, input, and testers for Animal Detect. The first "testable" version is coming in 2 months, and we would love to hear your opinions on it: positive, negative, suggested improvements, anything that comes to mind. I will drop a couple of links here.

https://www.animaldetect.com - Our homepage with waitlist. If you want to test the platform - join it! 🙂

My X profile - https://x.com/eugene_galaxy, LinkedIn - https://www.linkedin.com/in/eugenegalaxy/, and a demo video on YouTube (a very early showcase of the embeddings technology) - https://www.youtube.com/watch?v=mkT1_jpmewg

DM me if you are interested or know someone who is. Thank you all and have a nice day! :)

❤️ Nicolas Arrieta Larraza, Sara Beery, Malte Pedersen, Jennifer, Anton Alvarez
👀 Helena Russello, Gaspard Dussert, Alexander Merdian-Tarko
Jennifer (jzhuge@alumni.cmu.edu)
2024-11-06 15:34:28

*Thread Reply:* @Velizar

Jennifer (jzhuge@alumni.cmu.edu)
2024-11-06 15:40:05

*Thread Reply:* Hey, cool tool! When I click the link to your website I get "There's been a glitch" because it takes me to slack.com/yourwebsite; your actual website works!

Eugene Galaxy (jga@reallyarobot.com)
2024-11-07 01:20:57

*Thread Reply:* Oh you are right, thank you for the notice! 💡

It was an extra "/" in the link, whoops. 😇

Jodi Rowley (jodi.rowley@austmus.gov.au)
2024-11-07 22:52:41

Hi everyone! I have a #job opportunity at the Australian Museum in Sydney, Australia. I'm looking for someone to support scientific research using machine learning to identify frog species in audio from the FrogID project (frogid.net.au). The role is full-time for one year. You have until 25 Nov to apply. Unfortunately, at this stage you must be an Australian citizen or resident, or hold a current visa that allows you to work in Australia. Sorry! https://iworkfor.nsw.gov.au/job/scientific-officer-495522

🐸 Brittany Aguilar, Jennifer, Benjamin Hoffman